Mariusz Krzanowski blog

Category: Other

The time complexity trap in dictionaries

We developers often think about time complexity. We know that time complexity is the main factor that helps us create really powerful algorithms. When we need to store data requiring random access, we use hash sets or dictionaries. Why do we do this? Because we know that the time complexity of that kind of store is O(1).

When we use Dictionary<TKey,TValue> we assume that access via the key is really fast. We expect O(1) time complexity. We are right as long as we consider only the algorithm implemented inside the dictionary itself. What we sometimes miss is the cost of two operations that must be executed every time a value is accessed by its key: computing the key's hash code (GetHashCode) and comparing keys for equality (Equals).
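To make that concrete, here is a minimal sketch (the FileKey type and the numbers are illustrative, not taken from the post) of a key whose hash code and equality check depend on a very long string. The lookup is still O(1) in the number of entries, but every lookup pays the full cost of hashing and comparing the key:

using System;
using System.Collections.Generic;

// Hypothetical key type used only for illustration.
public readonly struct FileKey : IEquatable<FileKey>
{
    public FileKey(string path) => Path = path;
    public string Path { get; }

    // Hashing must scan the whole string: O(length of Path).
    public override int GetHashCode() => Path.GetHashCode();

    // Equality must compare the strings character by character in the worst case.
    public bool Equals(FileKey other) => string.Equals(Path, other.Path, StringComparison.Ordinal);
    public override bool Equals(object obj) => obj is FileKey other && Equals(other);
}

public static class Program
{
    public static void Main()
    {
        var longPath = new string('a', 1_000_000) + "/report.txt";
        var sizes = new Dictionary<FileKey, long> { [new FileKey(longPath)] = 42 };

        // "O(1)" lookup, but GetHashCode and Equals each walk ~1,000,000 characters.
        Console.WriteLine(sizes[new FileKey(longPath)]);
    }
}

The dictionary never stops being O(1) with respect to its element count; it is the per-lookup cost of the key operations that silently grows.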

Intensive cache miss

Introduction

It is common knowledge that using a cache can speed up our applications a lot. In this post I would like to focus on application design when a cache is involved. The simplest caching scenario that I frequently see follows this algorithm.
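The algorithm itself is spelled out in the full post; as a rough illustration, here is a minimal sketch of the usual check-then-populate (cache-aside) variant, assuming an in-memory ConcurrentDictionary as the cache and a caller-supplied loadFromSource delegate as the slow data source (both names are mine, not the post's):

using System;
using System.Collections.Concurrent;

public class CachedRepository<TKey, TValue> where TKey : notnull
{
    private readonly ConcurrentDictionary<TKey, TValue> _cache = new ConcurrentDictionary<TKey, TValue>();
    private readonly Func<TKey, TValue> _loadFromSource;

    public CachedRepository(Func<TKey, TValue> loadFromSource) => _loadFromSource = loadFromSource;

    public TValue Get(TKey key)
    {
        // 1. Look in the cache first.
        if (_cache.TryGetValue(key, out var cached))
            return cached;

        // 2. Cache miss: load from the slow source and remember the result for next time.
        var value = _loadFromSource(key);
        _cache[key] = value;
        return value;
    }
}

Even this simple version hides design decisions, such as expiration and the fact that several concurrent misses for the same key can each hit the slow source before the cache is populated.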

Scaling out your front end with an actor

Connecting the front end and the back end is difficult. The main problem is the protocol itself: it is stateless, and the connection is re-created for each request. HTTP/2 changes this behavior a little, but only at the transport layer. Multiple requests and responses can be handled in a different order, which means there are many traps waiting for a developer.
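One way to read the title (an assumption about the shape of the solution, not the exact design from the post): give each client session an actor with its own mailbox, so requests that arrive out of order over stateless connections are applied to the session state one at a time.

using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// Illustrative sketch only: one actor per client session.
// The mailbox serializes message processing, so out-of-order requests
// cannot corrupt the session state and no locks are needed.
public sealed class SessionActor
{
    private readonly Channel<Func<int, int>> _mailbox = Channel.CreateUnbounded<Func<int, int>>();
    private int _state; // hypothetical session state, e.g. a request counter

    public SessionActor() => _ = Task.Run(RunAsync);

    // Called from request handlers: enqueue a state transition instead of touching shared state.
    public ValueTask PostAsync(Func<int, int> message) => _mailbox.Writer.WriteAsync(message);

    private async Task RunAsync()
    {
        await foreach (var message in _mailbox.Reader.ReadAllAsync())
            _state = message(_state);
    }
}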

Long ping response and slow network kill your application

Business requires fast software delivery. This sometimes leads to a situation in which developers focus less on performance and more on business requirements. The problem is that a badly designed architecture can make it impossible to fix performance later. Many times I have explained that a page with a total weight of about 2 MB is too large, and that the server side has limited bandwidth: with a 1 Gb/s connection on the server side you can serve only about 50 concurrent users their full content within 1 second. I also tell my colleagues that we have to reduce the number of requests, because each request consumes not only a slot in the browser's connection pool but also server resources. When clients are located far from our server, pages can load very slowly. When I was on the Galapagos I realized (again) how slow a network connection can be and how important wise web design is.
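As a rough back-of-the-envelope check of that number (assuming the page really weighs about 2 MB and the link can be fully saturated): 2 MB is roughly 16 Mb on the wire, so 1 Gb/s ÷ 16 Mb ≈ 62 full page loads per second; once protocol overhead and real-world throughput are taken into account, about 50 users receiving their full content within one second is a realistic ceiling.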
