Popping in for this. There's very little out there that is 'objectively' best in every case - what there is instead is a cost associated with each approach. Sure, in many common cases it feels like you don't pay a price for skipping the algorithm or data structure with the ideal worst-case complexity. When your problem space is ~1000 things with ordinary reads, writes, and updates, it's just fine to use whatever your language of choice provides as a map or associative array, or whatever database system is handy. With a simple enough problem, hardware is fast enough that a slightly suboptimal solution is close enough that it doesn't seem to matter.

That stops being true once you deal with large problem spaces and more specialized projects. The one-size-fits-all solutions built into languages don't always apply. Suddenly building a hashmap over an enormous dataset isn't realistic. The query that worked well enough over 1000 records doesn't scale to millions. Maybe you lose so much time in data I/O that the processor time your default algorithm saves doesn't matter.

This becomes an even bigger deal when you throw security into the mix. Languages are designed to be functional, and their tutorials and documentation don't make it obvious where you're incurring security risk if you aren't aware of what's going on in the background. Something that looks innocuous to someone casually building a system can be a risk to the entire system. This is exactly the sort of thing that causes projects to fail in the real world all the time: the laziness snowballs into a system that falls well short of performance expectations and becomes a burden to anyone trying to get value from it.
Furthermore, reliance on built-in behavior tends to lead to sloppy design: the meat of the system isn't properly modular, which prevents it from growing with the technology around it. On a real project, you want to deeply understand exactly what your system is doing, or you won't build a system that actually solves the target problem effectively.

To give a specific example, my office handles large volumes of research data. In a simple out-of-the-box enterprise data system (MSSQL), querying our larger datasets takes multiple hours, if not days, because it's running a cursor query across hundreds of millions of records. Running the same query in an OLAP database takes roughly 1/10th to 1/100th of the time. Pretty fast, but still noticeable - you'll have to get a cup of coffee before you see results. Abandoning relational database systems entirely and scanning flat files directly (using C code) completes the query in less than a second.

Theoretically, even the MSSQL setup is a functioning solution, but it's very hard to use, it doesn't scale past one person querying the data at a time, and it's hard to adjust your query when each iteration costs hours or days. Even with the OLAP database, it's painful to narrow down details. With the manual C solution, our researchers can easily refine their queries to get meaningful results, without being limited at all by the system they're using.