The Art of Computer Programming

Donald Ervin Knuth


Mentioned in questions and answers.

Could anyone give me some pointers as to the best way in which to learn how to do very low latency programming? I have many programming books but I've never seen one which focused (or helped) on writing extremely fast code. Or are books not the best way forward?

Some advice from an expert would be really appreciated!

EDIT: I think I'm referring more to CPU/memory-bound code.

[C++ programmer]:

Ultra-low-latency programming is hard. Much harder than people suspect when they first start down the path. There are some techniques and "tricks" you can employ, such as I/O completion ports, multi-core utilization, highly optimized synchronization techniques, and shared memory. The list goes on forever. (edit) It's not as simple as "code-profile-refactor-repeat", because you can write excellent code that is robust and fast but will never be truly ultra-low-latency code.

Unfortunately there is no single resource I know of that will show you how it's done. Programmers specializing in (and good at) ultra-low-latency code are among the best and most experienced in the business, and with good reason. Because if there is a silver-bullet answer to becoming a good low-latency programmer, it is simply this: you have to know a lot about everything. And that knowledge is not easy to come by. It takes years (decades?) of experience and constant study.

As far as the study itself is concerned, here are a few books I found useful or especially insightful for one reason or another:

I started programming a few years ago, and I generally program in C or C#. Now I want to learn some algorithms, both for my own learning and to teach my friends.

So which algorithms do you advise for beginners?

Here is a book set about algorithms in general -> The Art of Computer Programming

I'm a self taught Ruby on Rails engineer, and I'm looking to improve my CS understanding. However, most books about data structures and algorithms are written in Java/C/C++/etc, which I don't know. Is there text on these topics using Ruby? Or do you feel Java is similar enough to Ruby that I could survive through a book?

Is there any recommended text for someone coming from my background?

P.S. Recently I've been looking at Objective C, so I'm not completely blind to statically typed languages.

There's a bunch of books on algorithms that are not tied to a specific language. Check

I also recommend a fundamental, still-unfinished classic

I can't understand how a computer could make random numbers. I mean, what piece of hardware can do this? And does the computer have only one source for this that all the programming languages use? Thanks in advance.

It can be done in hardware, but most languages, like Java and C#, use a software construct best explained by Donald Knuth in his opus "The Art of Computer Programming": the linear congruential generator.
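As a sketch of what such a generator looks like, here is a minimal 48-bit LCG using the multiplier, increment, and seed scrambling that the `java.util.Random` documentation specifies; this is an illustration, not production-quality randomness:

```java
import java.util.Random;

public class Lcg {
    // Constants documented in the java.util.Random API specification.
    private static final long MULTIPLIER = 0x5DEECE66DL;
    private static final long INCREMENT  = 0xBL;
    private static final long MASK       = (1L << 48) - 1; // modulus 2^48

    private long state;

    public Lcg(long seed) {
        // java.util.Random scrambles the seed the same way.
        this.state = (seed ^ MULTIPLIER) & MASK;
    }

    // Advance the state and return its top `bits` bits.
    private int next(int bits) {
        state = (state * MULTIPLIER + INCREMENT) & MASK;
        return (int) (state >>> (48 - bits));
    }

    public int nextInt() {
        return next(32);
    }

    public static void main(String[] args) {
        Lcg lcg = new Lcg(42);
        Random ref = new Random(42);
        // For the same seed, the sketch reproduces java.util.Random's stream.
        for (int i = 0; i < 5; i++) {
            System.out.println(lcg.nextInt() == ref.nextInt()); // true
        }
    }
}
```

Each new value is just `(a * state + c) mod 2^48`, which is why the sequence is deterministic given the seed, and why the generator eventually cycles.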

As you can imagine, there are problems with this approach.

There have been attempts to improve on it (e.g. the Mersenne Twister).

There is an extensive battery of statistical tests for assessing a given random number generation algorithm, called the Diehard tests. (I always picture big vehicles in a snowstorm being cranked in the cold by honking batteries when I hear about those tests.)

I'd be willing to bet that the period of these pseudo-random number generators is more than adequate for your applications.

The best way to generate a truly random number is to use a quantum process from nature in hardware.

I'm a new user of C#, but I've learned to make small, simple games. So I'm having fun and practicing C# that way.

However, now I need to change a moving object's speed by 0.2, so I can change the speed on an interval without the object bugging out. I'm using 'int' values to set the speed of the objects. My objects move at 2 pixels per millisecond (1/1000 sec). I have tried multiplying by 2, but after doing this once or twice, the objects move so fast they bug out.

I've looked through other questions on the site, but can't find anything that seems to help.


Is it possible to make an 'int' which holds a decimal value?

If yes, then how can I make it without risking bugs in the program?

Thanks in advance!

Is it possible to make an 'int' which holds a decimal value?

No, a variable of type int can only contain an integer. In the world of C# and the CLR, an int is any integer that can be represented in 32 bits. Nothing less, nothing more. However, a decimal value can be represented by integers; please see the update below and the comments.

In your case, I think a float or a double would do the job. (I'm not referring to decimal, since decimal is typically used for financial calculations.)
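A minimal sketch of the idea: keep the speed in a double so the 0.2 step survives, and round to whole pixels only when drawing. The question is about C#, but the int/double semantics are essentially the same, so this is shown in Java syntax; the 16 ms frame time is an illustrative assumption, not from the question.

```java
public class Speed {
    public static void main(String[] args) {
        // An int would truncate the fractional part; a double keeps it.
        double speed = 2.0;        // pixels per millisecond
        speed += 0.2;              // small, gradual change instead of doubling
        System.out.println(speed); // 2.2

        // Convert to whole pixels at the last moment, when moving the object.
        int frameMs = 16;          // assumed frame interval for illustration
        int pixels = (int) Math.round(speed * frameMs);
        System.out.println(pixels); // 35
    }
}
```

Doubling the speed each tick grows it exponentially, which is why the objects "bug out"; adding a fixed fractional step grows it linearly and stays controllable.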


One important point from the comments below, made by mike-wise, is that a floating-point value can be represented by integers, and this was in fact how it was done before computers had floating-point registers. Mike also notes that more information on this can be found in The Art of Computer Programming, Volume 2, Chapter 4.
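That integer representation is usually called fixed-point arithmetic, and it would also answer the original question directly: store the speed as an int in some smaller unit. A minimal sketch (the scale factor of 1000 and the 16 ms frame are illustrative assumptions):

```java
public class FixedPoint {
    public static void main(String[] args) {
        // Store the speed as an int, in thousandths of a pixel per millisecond.
        final int SCALE = 1000;
        int speed = 2 * SCALE;     // 2.000 px/ms
        speed += 200;              // add exactly 0.2 px/ms, no rounding drift
        System.out.println(speed); // 2200, i.e. 2.2 px/ms

        // Convert to whole pixels only when moving the object.
        int frameMs = 16;
        int pixels = speed * frameMs / SCALE; // integer division truncates
        System.out.println(pixels); // 35
    }
}
```

The trade-off: increments of 0.2 are exact (no binary floating-point error), but you must pick the scale up front and watch for overflow in the intermediate multiplication.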