Algorithms, Cat Videos, and Human Limitations

by Luke Jacobs

“If math is the study of truth, Computer Science is the study of complexity”

Unknown


“Oh, the depth of the riches and wisdom and knowledge of God! How unsearchable are his judgments and how inscrutable his ways!”

Romans 11:33


A widely accepted view of Computer Science is that it is just engineering: you use it to program computers, make some robots, or build websites, and that is pretty much all there is to it. I think this view is far too simplistic.

The main study of Computer Science is algorithms. Many people do not realize that the development of algorithms has been not just helpful, but foundational to building modern society. Algorithms not only bring us into an exploration of the technological world we live in, but also reveal to us our place in the universe.



Algorithmic Complexity



Algorithms are lists of instructions to complete a certain task given some information. In the real world, we execute algorithms all the time. One example is a rubric. A rubric outlines qualities that an assignment should contain to get the best score. A teacher looks at the rubric and follows its instructions, thus executing an algorithm. A rubric has an input and an output. The input is an ungraded assignment and the output is a grade. An algorithm is just a set of directions that tells you how to work.

Algorithms have measurable properties. Programmers are most concerned with one named “time complexity.” Time complexity describes roughly how many steps an algorithm must take to finish a task of a certain difficulty. Here is a visualization of some time complexity models (the technical term is Big-O complexity):



The steeper the curve, the longer it takes for an algorithm with that time complexity to finish its task. Programmers want to write algorithms that do not require significantly more steps (“operations”) as their task difficulty (“elements”) increases.

For example, say that Mr. Jacobs gets sick of grading multiple choice questions on science tests, so he decides to blow all the science budget on a Scantron machine. The machine takes 5 seconds to grade a test, no matter the number of multiple choice questions on the test. The machine is executing an algorithm which takes test answers and returns a grade as an output.

We can model the time complexity of this algorithm for a given n. The variable n represents the task load of an algorithm: as n grows in size, so does the difficulty of the task presented to the algorithm. Let’s say that n equals the number of multiple choice questions on a test. In terms of n, how long will it take the machine to grade 10 tests? Remember that the machine grades any test in 5 seconds, no matter the number of questions, so n could be 100 and the machine would still grade one test in 5 seconds. Grading 10 tests would therefore take 50 seconds no matter what n is: constant time. Constant time complexity is the ideal for programmers, but in practice, algorithms rarely take a constant amount of time to process a variable amount of data. In real life, most algorithms take longer to complete a task as it grows in difficulty.

Now let’s say that n equals the number of tests that we feed the machine. Our time complexity would change, since n has changed. Since the machine requires 5 seconds per test, the total time would be 5n seconds: linear time. Now the time it takes for our algorithm to finish depends on n. As n increases, so does our algorithm’s execution time.
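The two grading scenarios above can be sketched in a few lines of Python. This is just an illustration of the essay's numbers; the function names are made up for the example.

```python
# The Scantron machine takes 5 seconds per test, as in the example.
SECONDS_PER_TEST = 5

def grading_time_constant(num_questions):
    """Grade one test. Time is the same no matter how many
    questions are on it (n = number of questions): O(1)."""
    return SECONDS_PER_TEST

def grading_time_linear(num_tests):
    """Grade a stack of tests at 5 seconds each
    (n = number of tests): O(n), linear time."""
    return SECONDS_PER_TEST * num_tests

print(grading_time_constant(100))  # 5  -- more questions, same time
print(grading_time_linear(10))     # 50 -- ten tests take 50 seconds
```

Notice that doubling the questions changes nothing in the first function, while doubling the tests doubles the answer of the second.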

Up until this point, the math behind time complexity has been almost self-evident, and maybe a little boring. You might start thinking that this has nothing to do with our society, let alone our place in the universe. However, time complexity becomes interesting when it grows exponentially.

Let’s move on to a new example to illustrate what exponential time complexity looks like in the real world. Say you want to break into Mr. Vahle’s computer. Mr. Vahle is a cunning man, so he does not use a password that you can guess. He chooses instead to use a random number with n digits. If the number of digits (n) he chose was 1, his password would only be 1 digit, so we would have to guess the numbers 0 through 9 to try all the possible combinations. If n was 2, we would need to guess the numbers 0 through 99 to cover every combination of 2 digits. In short, it would take you at most 10^n guesses (10 multiplied by itself n times) to break into his computer. If it takes a constant amount of time to guess 1 password possibility – say, 2 seconds – then the time complexity of our task would be 2 · 10^n seconds, exponential time.
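A quick sketch shows how fast 2 · 10^n grows. The 2-seconds-per-guess figure is from the example above; the function name is invented for illustration.

```python
SECONDS_PER_GUESS = 2

def worst_case_seconds(n_digits):
    """Worst case for an n-digit numeric password:
    10**n possibilities at 2 seconds each."""
    return SECONDS_PER_GUESS * 10 ** n_digits

for n in (1, 4, 8, 12):
    print(n, worst_case_seconds(n))
# By n = 12 we are already at 2 trillion seconds --
# tens of thousands of years of guessing.
```

Each extra digit multiplies the worst case by 10, which is exactly why exponential time is so punishing.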



Intractable Problems



As anyone who has tried to guess a long password knows, if Mr. Vahle chose a large value of n, it would take an unreasonable amount of time to break into his computer. In Computer Science, we call this type of problem "intractable." Intractable problems have no fast algorithm to solve them. Large instances of intractable problems take so much time to solve that the fastest supercomputers in the world could run for billions of years and never find an answer, much as it would take us years to guess all the password combinations to Mr. Vahle’s computer.

Intractable problems do not go away as we develop faster computers. Let’s say we buy a robot to guess Mr. Vahle’s password. The robot can guess passwords 10x faster than we can, but this increase in speed does not defeat the problem we face. If Mr. Vahle added just one more digit to his password, it would take us the same amount of time to guess all the password possibilities as it did before we bought the robot: by increasing n (the number of password digits) by just 1, he multiplied the number of password possibilities by 10. Even if we could make guesses 100x faster, Mr. Vahle could simply add a couple more digits to his password, and that would thwart our attempts to speed up the password-guessing process. As you can see, as the n of an intractable problem increases, the amount of time required for an algorithm to solve it skyrockets. A problem that has been proved intractable is so hard that no intelligence in the universe can solve a large instance of it much faster than by testing every possible answer.

You might think that intractable problems are problems that mathematicians would avoid. How could they possibly be useful if we cannot compute their solutions? You might think that we should just abandon that area of mathematics and move on to a more applicable one. However, an incredible behavior of math is that developments in a seemingly useless field can find strange applications in the real world.



Mathematics and Usefulness



The mathematician Bernhard Riemann did work in the 1850s on geometry with more than 3 dimensions. (Yes, this actually exists.) At the time of his work, this area of math had no practical use. After all, all the geometry in our world is 3-dimensional, right? Riemann did not care that he was devoting his time to math that might never be used; he just wanted to learn about the world for the sake of exploration. Being a Christian, he viewed his job as a way to serve God by delighting in Creation. Delighting in Creation does not mean searching for ways to capitalize on it; it means contemplating its beauty.

In his lifetime, almost everyone would have thought that Riemann’s work on multi-dimensional geometry would have no purpose, but math reveals itself in strange places. Around 50 years later, another German man – you may know him – named Albert Einstein wrote a paper on a concept called “general relativity.” It explained how space and time are connected. Together they are like a fabric that can be bent, warped by stars and planets. In order to represent this effect, Einstein had to use multi-dimensional geometry, Riemann’s “useless” work. General relativity is now used to engineer GPS systems, so you can thank Riemann every time you pull up Google Maps.

It should not surprise us then, when I say that intractable problems can actually have a use. In fact, one particular intractable problem is so important that modern society would crumble if an algorithm could solve it efficiently. The problem is that large numbers are really hard to factor.



Semiprimes: The Most Important Group of Numbers No One Knows About



In algebra we learn how to do prime factorization, the process of reducing any whole number greater than 1 to a list of prime numbers:



25 = 5 * 5
100 = 4 * 25 = 2 * 2 * 5 * 5
7 = 7 (cannot be reduced further, because it is a prime itself)
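Factorizations like these can be computed by repeated trial division, the "keep breaking it down" approach. Here is a minimal Python sketch (the function name is just for illustration):

```python
def prime_factors(n):
    """Factor n into primes by trial division: keep dividing out
    the smallest divisor until nothing is left."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(prime_factors(25))   # [5, 5]
print(prime_factors(100))  # [2, 2, 5, 5]
print(prime_factors(7))    # [7]
```

This works instantly for small numbers like these, which is exactly the point of contrast coming up: for certain very large numbers, no known method is remotely this easy.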



As you can see, primes are the building blocks of numbers. Every number is just a combination of them. Now let’s look at a special group of numbers named semiprimes. Semiprimes are the result of multiplying two primes. Here are a few examples:



25 = 5 * 5
2201 = 71 * 31
979458869581637 = 13935491 * 70285207



The most important property of semiprimes is that factoring them is believed to be an intractable problem. A number like 100 is easy to factor because you can keep breaking it down into smaller and smaller pieces: divide it by 2, then by 2 again, then by 5, and so on. In this way ordinary numbers are like a Lego set. It doesn’t take much time or effort to take apart Legos; little kids do it all the time. Semiprimes, on the other hand, are like atoms – a little harder to split apart. It took a huge group of physicists and mathematicians to learn how to split atoms. Semiprimes have been around as long as atoms have, yet we still have not found an efficient way to split them. They are actually harder to split than atoms: we have built amazing particle accelerators that can reliably break atoms, but the seemingly simple problem of factoring semiprimes has eluded mathematicians for centuries. Semiprimes are hard to factor because they are made of only two primes. You cannot break them into more than 2 pieces, so there is no working your way down through small, easy divisions; you must find one of the two large prime factors outright. Factoring them is believed to be intractable, but this does not limit their usefulness; in fact, it makes them some of the most useful numbers.
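You can feel the difficulty with a sketch of the most naive attack, trial division. It works fine on the small semiprimes above, but the loop would need on the order of 10^150 iterations for a 300-digit semiprime. (The function name is invented for this example.)

```python
def factor_semiprime(n):
    """Find the smaller prime factor of a semiprime by trial
    division. Fine for small n; hopeless for a 300-digit
    semiprime, whose smaller factor may have ~150 digits."""
    d = 2
    while n % d != 0:
        d += 1          # one guess at a time, just like passwords
    return d, n // d

print(factor_semiprime(2201))  # (31, 71)
print(factor_semiprime(25))    # (5, 5)
```

Real factoring algorithms are far cleverer than this, but none known runs fast enough to crack semiprimes of cryptographic size.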



Semiprimes and Cryptography



The property that makes semiprimes essential to cryptography is that they are easy to generate but easy to take apart only if you already know one of the factors. Modern computers can generate a random 150-digit prime number in under a second – pretty fast for a number that big. All a computer needs to do is generate two such primes and multiply them together, and it has a 300-digit semiprime that is virtually unfactorable. To the outside world, its factors are a secret. Only your computer knows the original factors, which act as your password. Because you know your semiprime’s factors, you can encrypt and decrypt messages that outsiders cannot decode. The state-of-the-art cryptographic systems employed by governments around the world are based on the simple properties of these special numbers.
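Here is a toy version of that key-generation step, shrunk to 6-digit primes so the naive primality check finishes quickly. Real systems use ~150-digit primes and fast probabilistic primality tests instead of this slow loop; all the names here are made up for the sketch.

```python
import random

def is_prime(n):
    """Naive primality check -- fine at this demo size only."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def random_prime(digits):
    """Pick random odd numbers of the given size until one is prime."""
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1
    while True:
        candidate = random.randrange(lo | 1, hi, 2)  # odd candidates
        if is_prime(candidate):
            return candidate

# The secret key is the pair of primes; the semiprime can be public.
p, q = random_prime(6), random_prime(6)
semiprime = p * q
print(p, q, semiprime)
```

Multiplying p and q is instant; recovering them from the semiprime alone is the hard direction, and that asymmetry is what the cryptography leans on.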

But cryptography is not just for governments or people who need to hide things. We use it every day without knowing it. Cryptography is used in the HTTPS protocol, which secures the communication channel between our devices and the websites we visit. This is necessary because “secure” wifi networks are not entirely secure: wifi passwords protect the network from outsiders, but all the information sent and received within the network is completely visible. Without a secure channel provided by cryptography, anyone could see the cat videos you watch on public wifi (we know you do it). Without encryption, your devices’ communication with YouTube would be available for everyone to see, which means that while you watch a top ten video of the world’s largest cats, hackers would be watching your “secure” Google password appear on their screens. Everyone would be at risk of having their identities stolen and their bank accounts robbed, and no one could be certain that the person they are messaging online is the same friend they talk to in the real world. Governments would be unable to send messages to their soldiers on the battlefield, and nuclear war would probably be imminent. The world would be chaos. We depend on cryptography for much more than we realize.



The Meaning of the Intractable



Putting the problems of the world aside, can you see how semiprimes and intractable problems are beautiful? Studying intractable problems and number theory is like studying the limits of our intelligence: what we as humans are able to solve and what we cannot even touch. We learn about primes in middle school, but some of their simple properties stump even the best math PhD’s in the world. Factoring semiprimes is just one example; hundreds of other problems believed to be intractable have stumped computer scientists, and that might forever be the case.

I believe that God has created the world in such a way that we will always keep reaching. I doubt that we will ever come to the end of science. God wants us to know that we are creatures placed in a world that is beyond our comprehension, and our only recourse is to recognize him as our Creator and to rely on him for guidance. Computer Science, the study of algorithms, should put us in a state of humility because we are constantly confronted with barriers that we may never solve. We have come a long way in overcoming some of those barriers, but it looks like there are some barriers that may be insurmountable. This only reveals more of how God is infinite, and we are unbelievably finite. Computer Science is not just about programming video games or building robots that can do work for us, but more importantly about studying what humans can solve and what leaves us utterly puzzled.



Thanks for reading!