Decades later, computing is edging closer and closer to the “perfect” bound for organizing information sequentially, whether in a computer’s memory or, more simply, on a library shelf.
The “library sorting problem” or, more formally, the “list labeling” problem was first presented in a 1981 paper.
It is, precisely, a problem of organization. Imagine that all your books stand in a row on a shelf, ordered by author, with a single empty space at the end. If you want to add a new book by Isabel Allende, you won’t put it at the end of the shelf, but next to her other books.
You will then have to reorganize the shelf, opening a slot for the new book in the middle of the row, which can take a lot of work. Wouldn’t it be better to leave the empty spaces spread throughout the shelf? And how would you distribute them?
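The cost of the naive approach is easy to see in code. In this sketch (the author names and the `insert_sorted` helper are hypothetical, chosen only to mirror the bookshelf analogy), inserting near the front of a sorted Python list forces every later entry to shift one slot:

```python
import bisect

def insert_sorted(shelf, book):
    """Insert `book` into the sorted list `shelf`; return how many
    items had to shift one slot to the right to make room."""
    pos = bisect.bisect_left(shelf, book)
    shelf.insert(pos, book)           # every item after `pos` moves
    return len(shelf) - pos - 1

# A hypothetical shelf ordered by author surname.
shelf = ["Allende", "Borges", "Eco", "Marquez", "Saramago"]
moved = insert_sorted(shelf, "Atwood")  # lands right after "Allende"
```

With the shelf nearly full and the new book near the front, almost all of the books move, which is why the work grows in proportion to the total number of items.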
“It’s a very important problem,” says Seth Pettie, a computer scientist at the University of Michigan, not just for librarians but for computing, because many of the data structures we rely on today store information sequentially.
A common way to measure progress on this problem is to look at how long a single insertion takes, he explains. Naturally, this depends on the number of items already stored, a value typically denoted n. In the Isabel Allende example, where every later book has to move to accommodate the new one, the time required is proportional to n.
Researchers want to know whether it is possible to devise an algorithm whose average insertion time is much lower than n. For over four decades, no one fully succeeded. In 2004, a team proved that, for any smooth or deterministic algorithm, it is impossible to achieve an average insertion time better than (log n)^2.
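The classic way to beat proportional-to-n insertions is to tag each item with an integer label drawn from a range far larger than n: most insertions just pick a label between two neighbors, and only occasionally must items be relabeled to spread the gaps back out. The toy sketch below illustrates that idea only; it is not the algorithm from the papers discussed here, and the `insert` helper and `universe` parameter are made-up names:

```python
import bisect

def insert(labeled, item, universe=1 << 20):
    """Insert `item` into `labeled`, a sorted list of (label, item)
    pairs; return how many items received a (new) label."""
    keys = [it for _, it in labeled]
    i = bisect.bisect_left(keys, item)
    lo = labeled[i - 1][0] if i > 0 else 0
    hi = labeled[i][0] if i < len(labeled) else universe
    if hi - lo > 1:
        # Room between the neighbors: label the newcomer in O(1).
        labeled.insert(i, ((lo + hi) // 2, item))
        return 1
    # No room left: relabel everything, spreading items evenly
    # across the label space -- the expensive case that list-labeling
    # algorithms try to make rare.
    items = keys[:i] + [item] + keys[i:]
    step = universe // (len(items) + 1)
    labeled[:] = [(step * (k + 1), it) for k, it in enumerate(items)]
    return len(items)
```

How cleverly a scheme decides where to place labels, and how often it is forced into the full relabeling case, is exactly what separates the (log n)^2 bound from the newer results.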
But in 2022, Bender, Kuszmaul and four co-authors created a “history-independent”, non-smooth, randomized algorithm that finally lowered the upper bound, bringing the average insertion time down to (log n)^1.5.
“It’s like using encryption to make your algorithm faster,” said Kuszmaul. “Which looks a little weird.”
In last year’s work, Bender, Kuszmaul and others made an even bigger improvement. They broke the record again, lowering the upper bound to (log n) × (log log n)^3, roughly equivalent to (log n)^1.000…1. That brings it very close to the theoretical floor, the lower bound of log n.
“The way to make it work was to be strategically random about how much history to take into account when making our decisions,” Bender explained.
Now there are new challenges. “We know we can almost reach log n,” Bender said, “but there is still a small gap” – the tiny log log n term that stands in the way of a complete solution. “We don’t know if the right thing to do is lower the upper bound or raise the lower bound.”
Any future improvement is much more likely to come on the upper bound, Pettie reckons, pushing it down toward log n. “But the world is full of strange surprises.”