How to deal with performance issues without effort

Dealing with performance is a task every programmer faces. It can be exciting, because it lets you show off your algorithmic skills, but it can just as easily lead to over-engineering or to wasting time on simple tasks.

The truth is that most of us don't work at Google and don't write compilers. You probably don't have to handle trillions of requests or achieve microsecond execution times. In fact, average performance is good enough for most of the use cases you will deal with. Intuition and best practices can let you skip formal complexity analysis and build your software faster.

In this article, I'm going to give you a recipe for avoiding performance issues without thinking too much. It's based on three basic principles that will spare you from benchmarking, writing C code, or hunting for exotic algorithms.

Make it straightforward

What I mean by straightforward is that you should always take the shortest path. Never repeat the exact same operation twice, and remove anything that could become a bottleneck. Simple habits, like avoiding nested loops and being careful about function calls inside loops, will spare you most headaches. Inside a loop, stick to cheap operations: assignments, attribute and index access, operators, and comparisons.

Let's see a simple example:

texts = ["word", "sentence is big", "number"]
tab = [1, 3, 4, 15, 53, 134, 604]
results = []

for idx, val in enumerate(tab):
    print("%d/%d" % (idx + 1, len(tab)))

    result = False
    for text in texts:
        if len(text) == val:
            result = True

    results.append(result)

print(results)

By applying the straightforward rule, you can greatly improve this code. First, you don't want a nested for loop, so let's build a hash (a dict keyed by text length) that turns the inner test into a constant-time lookup. Second, you don't want to recompute the length of tab on every iteration, so compute it once before you start looping.

texts = ["word", "sentence is big", "number"]
tab = [1, 3, 4, 15, 53, 134, 604]
tab_length = len(tab)
results = []
length_hash = {}

# Map each text length to True so the test becomes a constant-time lookup.
for text in texts:
    length_hash[len(text)] = True

for idx, val in enumerate(tab):
    print("%d/%d" % (idx + 1, tab_length))
    results.append(length_hash.get(val, False))

print(results)
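
As a side note, the same idea reads even more naturally with a Python set, which offers the same constant-time membership test:

texts = ["word", "sentence is big", "number"]
tab = [1, 3, 4, 15, 53, 134, 604]

lengths = {len(text) for text in texts}
results = [val in lengths for val in tab]

print(results)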

NB: Many people have already solved the problems you will face (sorting, filtering, etc.), so reuse existing libraries. You will have a hard time outperforming them. If you want to make sure you aren't making a mistake, you can measure with benchmarking tools like JSPerf.
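
To illustrate, here is a minimal sketch using Python's standard timeit module (bubble_sort is a deliberately naive hand-rolled example): measuring both versions shows instantly how hard the built-in sorted is to beat.

import random
import timeit

def bubble_sort(values):
    # Naive hand-rolled sort, O(n^2) comparisons.
    values = list(values)
    for i in range(len(values)):
        for j in range(len(values) - 1 - i):
            if values[j] > values[j + 1]:
                values[j], values[j + 1] = values[j + 1], values[j]
    return values

data = [random.random() for _ in range(1000)]

print(timeit.timeit(lambda: bubble_sort(data), number=10))
print(timeit.timeit(lambda: sorted(data), number=10))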

Treat machines as humans

What I said above may sound obvious to most of you, but the following catches attention every time I mention it. To avoid performance headaches, simply consider machines as humans and treat them with respect.

First, write clean and clear code. What's the relation? The same way bloated code is hard for your colleagues to understand, it is hard for your compiler or interpreter to optimize.

Yes, that 'return' you put in the middle of your 300-line function creates an unexpected exit point. It doesn't help your compiler (and Bob will hate you when he realizes the function's indentation means nothing). Short functions are a good example too: they are easier for humans to read and easier for machines to optimize. By cleaning up your code, you may be surprised by the performance improvements that follow.

Above all, writing proper code makes bottlenecks stand out clearly, so you will find them faster.
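
A minimal sketch of that last point (parse and expensive_filter are hypothetical names): once the work is split into small, named functions, a profiler can tell you exactly which one eats the time, whereas with one 300-line function every profile line points at the same blob.

import cProfile

def parse(line):
    return line.split(",")

def expensive_filter(fields):
    # A deliberately slow step, so it stands out in the profile.
    return [f for f in fields if f.strip().lower() not in ("", "null")]

def process(lines):
    results = []
    for line in lines:
        results.append(expensive_filter(parse(line)))
    return results

lines = ["a,b,null, c"] * 10000
cProfile.run("process(lines)")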

In the same way, don't overload the machine. Loading the mule too heavily doesn't make it run faster. Moreover, your software probably interacts with many others that won't be as fast as you expect. Say you implemented a parallelized algorithm that runs twenty workers simultaneously on a quad-core CPU, and every job runs a complex database query with joins and sorting. Your code may look smart, but your CPU usage is probably not optimized. Worse, what you achieve is a kind of DDoS against your own database. Sending 100,000 queries per second may sound super cool, but the database probably won't like it much, especially if it has to process other queries at the same time. Once again, respect it and give external components the time to answer your requests.
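
Here is a minimal sketch of that idea, with run_query standing in as a hypothetical database call: cap the number of concurrent workers at what the CPU and the database can actually absorb, instead of firing everything at once.

from concurrent.futures import ThreadPoolExecutor

def run_query(query):
    # Hypothetical placeholder for a real database call.
    return len(query)

queries = ["SELECT ... WHERE id = %d" % i for i in range(1000)]

# Four workers on a quad-core machine: the database sees at most
# four queries at a time instead of a thousand.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_query, queries))

print(len(results))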

To get the most out of machines, you have to talk to them properly and accept that they can't process everything in the blink of an eye.

NB: It may sound contradictory, but don't be shy either. Using all the resources of a machine is fine if that machine is dedicated to your task.

Perception

Perceived performance is tightly coupled with human-computer interaction. The information you display influences how fast the user feels your software is. That's why we put spinners and progress bars everywhere something is running: they show that something is happening while the machine is working.

That means you can afford to be slow if you keep the user well informed about what you're doing. There are many tricks to keep the user waiting happily: displaying an animation, or even loading a simple video game, can help while your two-minute job runs.

Sometimes it's even better to make things slower. Take a common issue single-page applications face: when you run a heavy computation in the browser (on the client side), it freezes the user's browser, and that kind of freeze badly hurts the user experience since it blocks the entire page. If you add a 50 ms timeout (a sleep) every 100 operations, the processing becomes much slower (5 extra seconds for 10,000 operations), but thanks to the asynchronous behavior of the JavaScript engine, the browser is free during each timeout. It can handle other events and no longer freezes. From the user's point of view, the slower algorithm looks more performant.
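
Here is the same trick sketched with Python's asyncio, since this article's examples are in Python (in the browser you would use setTimeout for the same effect): yielding control every 100 operations keeps the event loop free for other tasks.

import asyncio

async def heavy_job(items):
    results = []
    for i, item in enumerate(items):
        results.append(item * item)  # the "expensive" work
        if i % 100 == 99:
            # Sleep 50 ms: 5 extra seconds over 10,000 operations,
            # but the event loop stays free to run other tasks.
            await asyncio.sleep(0.05)
    return results

print(len(asyncio.run(heavy_job(range(10000)))))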

Better yet, show users that the computer is working by displaying the results that are already available. It makes the wait less boring, and users see that the computer is doing its best to fulfill their request. Finally, reward your users for their patience: Mailchimp, an emailing company, congratulates you with a high five when your email campaign is sent. It makes you happy, and you completely forget the 30 seconds you spent waiting for the job to finish.
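
A minimal sketch of that idea, where process_one is a hypothetical stand-in for the real work: a generator yields each result as soon as it is ready, so the user sees steady progress instead of a silent wait.

import time

def process_one(item):
    # Hypothetical slow unit of work.
    time.sleep(0.1)
    return item * 2

def process_all(items):
    # Yield each result as soon as it is available instead of
    # returning everything at the end.
    for item in items:
        yield process_one(item)

for idx, result in enumerate(process_all(range(10)), start=1):
    print("%d/10 -> %r" % (idx, result))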

Final Thoughts

Making things simple and straightforward, treating your machine with respect, and being careful about human perception will free you from 90% of the performance problems you will meet.

What about the rest? Another 9.8% can be avoided by applying other simple methods, like choosing the right tool for the job or benchmarking each step of your process to identify bottlenecks. A further 0.19% can be fixed by changing your core technology, pulling in C dependencies, or implementing clever patterns you find on the internet. Finally, for the last 0.01%, admit that some people are better suited to the job than you are: don't waste too much time, and ask performance experts for help.

One last thought: I think it's great to learn complexity theory at school or on your own. It will give you a better understanding of the problems. But once you have understood the essentials, let your intuition do the job.
