Quantum Computing (C.S. Version)
Lightly going over the physics while diving deeper into the computer science of quantum computing
Foreword
I want to begin by stating that I am NOT a quantum expert, nor do I pretend to be one on the internet. I am merely a curious individual who enjoys learning and explaining new technology to the masses, while also presenting its use and benefit for future applications. This actually places me in an advantageous position for you, the reader: since I am not an expert by conventional definition, I can try to explain these mind-bending concepts to you more plainly, or at least attempt to.
I was thinking of writing one massive article explaining the CS of quantum, but I decided to divvy it up for y'all into more digestible, bite-sized chunks. This will come full circle back to crypto and PlutusX soon enough. For now, buckle up, blow the dust off those math books, and get ready to dive into the great unknown realm of Quantum Science!!!!
We are nearing an era of quantum supremacy. Currently, quantum computing is at roughly the same stage conventional computers were at in the late 60s. We are on the cusp of developing scalable qubits, and as development continues, prices will begin to drop. As the cost of entry becomes less expensive, more adoption can occur. Google and IBM are currently dominating the industry in both research and development. IBM already offers interested prospects the ability to use a quantum computer from home with IBM Q.
Try it out here (NOT AN AD): https://quantumexperience.ng.bluemix.net/qx/experience
What is quantum?
First, let's get into some probability theory. Gerolamo Cardano was an Italian polymath of the 1500s who wrote one of the earliest systematic treatments of probability, based on games of chance. His work wouldn't bear fruit for roughly a century: it was only revisited after finally being printed in the 1660s, long after his death. The modern understanding of probability is summed up in 3 axioms (formalized much later by Andrey Kolmogorov), which we'll walk through now.
Definitions and Preliminaries
In order to understand the axioms for probability, we must first discuss some basic definitions. We suppose that we have a set of outcomes called the sample space S. This sample space can be thought of as the universal set for the situation that we are studying. The sample space contains subsets called events E1, E2, . . ., En.
We also assume that there is a way of assigning a probability to any event E. This can be thought of as a function that has a set for an input, and a real number as an output. The probability of the event E is denoted by P(E).
Axiom One
The first axiom of probability is that the probability of any event is a nonnegative real number. This means that the smallest that a probability can ever be is zero and that it cannot be infinite. The set of numbers that we may use are real numbers. This refers to both rational numbers, also known as fractions, and irrational numbers that cannot be written as fractions.
Axiom Two
The second axiom of probability is that the probability of the entire sample space is one. Symbolically we write P(S) = 1. Implicit in this axiom is the notion that the sample space is everything possible for our probability experiment and that there are no events outside of the sample space.
By itself, this axiom does not set an upper limit on the probabilities of events that are not the entire sample space. It does reflect that something with absolute certainty has a probability of 100%.
Axiom Three
The third axiom of probability deals with mutually exclusive events. If E1 and E2 are mutually exclusive, meaning that they have an empty intersection and we use U to denote the union, then P(E1 U E2 ) = P(E1) + P(E2).
The axiom actually covers the situation with several (even countably infinite) events, every pair of which are mutually exclusive. As long as this occurs, the probability of the union of the events is the same as the sum of the probabilities:
P(E1 U E2 U . . . U En ) = P(E1) + P(E2) + . . . + P(En)
Although this third axiom might not appear that useful, we will see that combined with the other two axioms it is quite powerful indeed.
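The three axioms can be checked concretely with a small example. Here is a minimal sketch using a fair six-sided die; the sample space, the events, and the `P` function are my own illustrative choices, not anything standard:

```python
# A minimal sketch of the three probability axioms using a fair six-sided die.
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability of an event (a subset of the sample space) for a fair die."""
    return Fraction(len(event & sample_space), len(sample_space))

evens = {2, 4, 6}
odds = {1, 3, 5}

# Axiom 1: every probability is a nonnegative real number.
assert P(evens) >= 0

# Axiom 2: the probability of the entire sample space is 1.
assert P(sample_space) == 1

# Axiom 3: mutually exclusive events (empty intersection) add.
assert evens & odds == set()
assert P(evens | odds) == P(evens) + P(odds)

print(P(evens))  # 1/2
```

Using `Fraction` instead of floats keeps the arithmetic exact, so the axiom checks hold with equality rather than approximately.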
That's the math of probability; now let's see how particles interact with their environment. The strange properties of sub-atomic particles, famously debated by Niels Bohr and Albert Einstein, are illustrated by the double-slit experiment, and from results like it a whole new understanding of physics was born.
Double slit
We presume that particles interact with their environment according to our everyday, visible understanding of physics. In school we are often misled into picturing the nucleus surrounded by orbiting balls of electrons, when in reality, at the quantum level, the electrons are in superposition and effectively everywhere at once. Only when you decide to measure the particle does its location become definite, and even that result only reflects a probability of where the particle is.
This evokes the thought experiment of Schrödinger's cat. A cat resides inside a sealed chamber along with a radioactive substance that may or may not decay, triggering a mechanism that would kill the cat. Until the experimenter opens the chamber to measure the result, the cat is treated as neither alive nor dead, but both at once.
Now that we know sub-atomic particles do not subscribe to our traditional understanding of physics, we have since learned of the interesting states that they can operate in.
Superposition
Superposition is when a particle occupies all of its possible states at once. Think back to the thought experiment above: the cat was both alive AND dead, and only when examined was a single result measured. Sub-atomic particles behave as probability waves, which means a particle is effectively spread across its entire domain of possible states until it is measured.
Entanglement
Quantum entanglement is when 2 particles participate in "spooky action at a distance," as Einstein put it. The particles are, for all intents and purposes, prepared together so that their properties become correlated: if one particle measures as spin-up, the other measures as spin-down. No matter what you do, and no matter the distance between them, entanglement keeps this bond.
There's another quantum phenomenon called quantum tunneling, in which particles pass through barriers they could never cross classically; however, it's not necessary for this article.
Okay, now that you have a preliminary understanding of quantum physics, let's dive into the actual CS substance.
Classical vs Quantum
Classical computers use bits, which can be either a 1 or a 0 but never both and never neither. Moore's Law observed that the number of transistors on a chip, and computing power as a result, doubles roughly every two years. We are currently nearing the physical limits of transistor scaling. This is where quantum computing comes into play.
Quantum computing is probabilistic while classical computing is deterministic. The reason, as we saw above, is that quantum particles act as probability waves rather than occupying a single fixed location. To program a quantum computer, you arrange constructive (positive) interference for the outcomes you want and destructive (negative) interference for the outcomes you do NOT want. The inherent problems with quantum computing today are application and scalability. Classical computers do certain tasks extremely efficiently, and many problems are simply not suited to probabilistic computing, or do not need that kind of processing power in the first place.
Another challenge we are attempting to overcome is verification. Quantum computers with fewer than about 50 qubits can be verified by supercomputers using brute force, but because the state space grows exponentially as 2^n, anything past roughly 50 qubits is virtually impossible to verify classically, and we are left with the dilemma of whether to believe the outcome or not.
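To see why ~50 qubits is the practical cutoff for brute-force verification, here's a quick sketch of how much memory a classical simulator needs to hold the full state vector. The 16-bytes-per-amplitude figure is my assumption (a complex number stored as two double-precision floats):

```python
# A classical simulator of n qubits must store 2**n complex amplitudes.
# Assuming 16 bytes per amplitude (two 8-byte doubles per complex number).
BYTES_PER_AMPLITUDE = 16

for n in (10, 30, 50):
    amplitudes = 2 ** n
    bytes_needed = amplitudes * BYTES_PER_AMPLITUDE
    print(f"{n} qubits -> 2**{n} = {amplitudes:,} amplitudes, {bytes_needed:,} bytes")
```

At 50 qubits this works out to roughly 18 petabytes just for the state vector, which is why brute-force checking stops being feasible around that point.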
Mach-Zehnder
Wanting to control the ubiquitous interference that particles experience on the quantum level, Ludwig Mach and Ludwig Zehnder created this simple interferometer, which we can treat as an interference gate. Quantum interference is omnipresent. Here is a diagram of the gates I created:
The Dirac Vector Notation
One bit with the value 0 is written |0>, while a bit with the value 1 is written |1>. As column vectors, |0> is (1, 0) and |1> is (0, 1).
If you struggle to remember this, think of each ket as an array indexed from 0: a 1 in the 0th index means the bit is 0, and a 1 in the 1st index means the bit is 1.
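The array picture above can be written out directly. This is a minimal sketch using plain Python lists as the vectors; the helper name `ket` is my own choice:

```python
# The Dirac vectors |0> and |1> as length-2 arrays.
ket0 = [1, 0]   # |0>: a 1 in index 0
ket1 = [0, 1]   # |1>: a 1 in index 1

def ket(bit):
    """Return the Dirac vector for a classical bit value (0 or 1)."""
    v = [0, 0]
    v[bit] = 1   # the bit's value IS the index that holds the 1
    return v

assert ket(0) == ket0
assert ket(1) == ket1
```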
Matrix Vector Multiplication
Understanding basic matrix multiplication is imperative before we can move forward to the operations below.
Here is the most common matrix, called the identity matrix. In a 3x3 identity matrix, the diagonal entries are 1 and all other entries are 0, so multiplying any vector by it returns that same vector unchanged, hence its name. The identity matrix is scalable to any size, so long as it follows the same pattern.
Similarly, there is a matrix that flips the bits. Unlike the identity matrix, its rows are swapped, which in turn flips the result: it maps |0> to |1> and |1> to |0>.
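Both matrices can be applied to the |0> and |1> vectors with ordinary matrix-vector multiplication. Here's a minimal sketch in plain Python (the `mat_vec` helper and the variable names are mine):

```python
def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a column vector (list of numbers)."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

I = [[1, 0],
     [0, 1]]    # identity: leaves any vector unchanged
X = [[0, 1],
     [1, 0]]    # bit-flip: the identity with its rows swapped

ket0, ket1 = [1, 0], [0, 1]

assert mat_vec(I, ket0) == ket0    # identity keeps |0> as |0>
assert mat_vec(X, ket0) == ket1    # the flip matrix sends |0> to |1>
assert mat_vec(X, ket1) == ket0    # ...and |1> back to |0>
```

Applying the flip matrix twice gets you back where you started, which previews the idea of reversible operations discussed later.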
Operations on ONE Classical Bit
Before we dive into quantum computation, we must first understand how classical bits (cBits) work, or at the very least the most basic functions typically applied to them. Here are the top four:
- Identity
- Negation
- Constant-0
- Constant-1
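The four functions above can be sketched directly as Python functions on bit values 0 and 1 (the function names follow the list; the truth-table printout is just for illustration):

```python
# The four one-bit functions, each taking a bit and returning a bit.
def identity(b):   return b        # f(x) = x
def negation(b):   return 1 - b    # f(x) = NOT x
def constant_0(b): return 0        # f(x) = 0, regardless of input
def constant_1(b): return 1        # f(x) = 1, regardless of input

# Print each function's truth table: [f(0), f(1)].
for f in (identity, negation, constant_0, constant_1):
    print(f.__name__, [f(0), f(1)])
# identity   [0, 1]
# negation   [1, 0]
# constant_0 [0, 0]
# constant_1 [1, 1]
```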
Reversible Computing
Reversible computing is a process that allows you to recover the input values from the output, provided you know which function was used. For example, if we pass a bit through the negation function, you know the input was 1 if the output is 0, and vice versa. Applying the same reasoning to the constant functions would yield only a 50/50 chance of guessing the correct input, which is why those functions are non-reversible.
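The reversibility test above boils down to one check: a one-bit function is reversible exactly when its two outputs are distinct, so the output pins down the input. Here's a minimal self-contained sketch (`is_reversible` is my own helper name):

```python
def is_reversible(f):
    """True if f maps 0 and 1 to distinct outputs, so the input is recoverable."""
    return f(0) != f(1)

def identity(b):   return b
def negation(b):   return 1 - b
def constant_0(b): return 0
def constant_1(b): return 1

assert is_reversible(identity)          # output equals input: trivially recoverable
assert is_reversible(negation)          # output 0 means input 1, and vice versa
assert not is_reversible(constant_0)    # both inputs give 0: a 50/50 guess
assert not is_reversible(constant_1)    # both inputs give 1: a 50/50 guess
```

This distinction matters for what comes next: quantum gates are all reversible, so only the identity and negation survive the jump to qubits unchanged.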
I'll probably wrap it up here for now. Review this page for a while and really understand the concepts explained above. Next we will talk in more depth about the H and X gates commonly used in quantum programs, and lastly we will explore applications for quantum computing.
Thanks for reading! If you enjoyed this article, make sure to applaud us down below! It would mean a lot to me, and it helps other people see the story.
Written by: Angel Mondragon.