Modern artificial intelligence, data algorithms, and the limits of computation.

Modern artificial intelligence systems and data algorithms are great for application development, but they are only sometimes fully applied to software and our daily lives.

This note teaches you how information can be broken back down into its original form, how quantum computers can perform such operations faster than classical computers, and why.

The smallest unit of information

We cannot represent information with a single symbol, as there are no combinations to compare it with; at the most abstract level, we must divide the whole into two parts, such as 1 and 0.

We put everything in place values of two, and then compare the information or real-world data against the patterns and calculation functions (using the sum of products) that express it. This may not sound easy, but it is straightforward, and anyone can understand it, as it is a basic and universal method. It may even surprise you how useful, simple, and widely implemented it is.

Numbers using only two symbols (binary)

See Binary number for more details. Counting in different quantities per column may seem complicated if you are learning this for the first time, but it is straightforward; I will break it down for you in detail and provide a tool at the end.

People are generally used to counting things in ten symbols: 9 + 1 = 10.

As soon as we add 0 + 1 = 1 in binary, the digit 1 plays the same role as 9 does in decimal: we are at the edge of the column, since we use only two symbols to count, zero and one.

Equivalent            Binary         Value
0 + 1 = 1 (9)         0 + 1 = 1      1
9 + 1 = 10 (90)       1 + 1 = 10     2
90 + 1 = 91 (99)      10 + 1 = 11    3
99 + 1 = 100 (900)    11 + 1 = 100   4

We count the same way we usually do, except that 1 + 1 = 10 in binary works like 9 + 1 = 10 in decimal, using two symbols rather than ten per column. Since 1 + 1 = 2 and there is no digit 2, we carry a one into the next column to the left, just as 9 + 1 = 10 carries because there is no tenth symbol past 9.

We can convert any binary number into a number composed only of the digits 9 and 0, and add one to determine the next binary number. 1011 + 1 = 1100 is the same as 9099 + 1 = 9100, and 9100 then becomes 9900, because each column holds only two digits instead of ten before carrying into the next column to the left; all values in between are invalid, leaving only nine and zero in scale.

Counting to 10 = 2 in binary twice makes 100 = 4, since each column counts to two instead of ten. This also means that counting to 100 = 4 twice makes 1000 = 8, as we add a new digit to the left while counting.

We know the value 734 is 700 + 30 + 4 = 734 in much the same way that we know 1100 is 8 + 4 = 12 in binary per column.

We know that 7.34 is the same as 7 + 3÷10 + 4÷100 = 7.34. In much the same way, 1 + 1 = 10 in binary is 2, and 0.1 + 0.1 = 1.0, which is 1 in value; so each step behind the binary point is a half, 1÷2 = 0.5, since 0.5 + 0.5 = 1. We know that 100 = 4 in binary; if we move this behind the point as 0.01, we need to add 0.01 four times to reach 1.00, so it has the same value as 1÷4 = 0.25 from the point.

We then know that 0.11 is 1÷2 + 1÷4 = 0.75 from the binary point, and that 0.01 is 1÷4 = 0.25. Just as 11 + 1 = 100 in binary, 0.11 + 0.01 = 1.00, the same as 0.75 + 0.25 = 1 in parts from the point.

The decimal point marks a position in the number; the values after the point are counted the same as regular digits. Each place value becomes a division by the number of digits we use per column, the farther we move to the right: 7.34 is 7 + 3÷10 + 4÷100 = 7.34 when using ten digits per place value.

Every number value can be represented in place values of two, much the same as place values of ten, counting to the right or left of the decimal point. You can use the following tool: rapidtables decimal to binary. It shows the steps of dividing a number into place values of 2 when counting.

Counting things in different quantities per column is part of a universal number-conversion method called radix, or base. Mastering radix and base conversion is unnecessary, but a basic understanding is useful. The Babylonians used 60 symbols before moving to the left each time, which made each digit 60 times its place value for each move to the left, or a sixtieth for each move to the right of the point.

Binary is as low, or abstract, as we can go when comparing data, which makes for easy analysis. It made possible all forms of arithmetic, math, and complex hardware that run complex physics and even simulate reality.
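As a minimal sketch of this place-value idea (the helper name toBinary is mine, not from the note), here is how a value splits into place values of two in JavaScript, the same language as the script at the end of this note:

//A small illustrative sketch: splitting a value into place values of two.
function toBinary( value )
{
  var digits = "";
  while( value > 0 )
  {
    digits = ( value % 2 ) + digits; //The remainder is the current binary digit.
    value = Math.floor( value / 2 ); //Move to the next place value of two.
  }
  return( digits || "0" );
}
console.log( toBinary( 12 ) ); //"1100" = 8 + 4 = 12, matching the example above.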

Breaking down information

In data science, we form the sum of products (SOP) of our program's code, math, or real-world data. We can remove all matching sums/comparisons, leaving behind the smallest logic, which translates to arithmetic and comparison logic for our program code. See the following Truth table SOP. Don't worry if you don't get it yet; I will break down the terminology and make it easy to understand.

The SOP method applies to building computers that are as small as possible, as well as to the code running on them, and to analyzing real-world data into patterns, arithmetic calculations, and math theorems that run on a computer or as a dedicated digital circuit.

There are specialized software development tools that optimize code for developers and utilize the sum of products to push hardware to its limits. The code these compilers generate for the computer often does not even match the steps the developer wrote.

At the smallest scale, we can represent all numbers as ones and zeros in a positional number system based on two digits per column rather than ten. We can even represent numbers like 3.1415 by adding a decimal point to our positional base two binary values.

A   B   Out
00  00  000
00  01  001
00  10  010
00  11  011
01  00  001
01  01  010
01  10  011
01  11  100
10  00  010
10  01  011
10  10  100
10  11  101
11  00  011
11  01  100
11  10  101
11  11  110

This is called creating a truth table for what you are analyzing. Here it shows two-bit addition in its most abstract form, using 1 and 0 to express the values.
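As a quick check, here is a minimal sketch (mine, not the note's) that generates the same two-bit addition table:

//Generate the two-bit addition truth table shown above.
//A and B are two-bit inputs; Out is their three-bit sum.
for( var a = 0; a < 4; a++ )
{
  for( var b = 0; b < 4; b++ )
  {
    console.log( a.toString( 2 ).padStart( 2, "0" ) + "  " +
                 b.toString( 2 ).padStart( 2, "0" ) + "  " +
                 ( a + b ).toString( 2 ).padStart( 3, "0" ) );
  }
}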

Sometimes we have many inputs and outputs, as shown in the example below.

A   B   Out1  Out2
10  01  11    01
01  10  01    11

We join all inputs and outputs in each row to make it easier to analyze.

Inputs  Outputs
1001    1101
0110    0111

We convert them back into individual inputs and outputs at the end of the analysis. To form the sum of products for our data, we assign 1 = true and 0 = false.

In English, we have the word "NOT": if something is not true, it is false, and something that is not false is true. Another English word, "AND", allows us to compare two or more statements in one sentence. If 2+3 is 5 and 7+1 is 8, then the sentence is true, as both comparisons are accurate; if one or more of the statements is false, the output is false = 0.

To resolve the data into precisely what it is, you compare each zero digit using "NOT" under one input combination to make them all true, then use "AND" to output one only when that combination is set.

If input "A" is 10 and input "B" is 01, then flipping all the zeros to ones gives 1, NOT(0), NOT(0), 1 = 11, 11, which will only be all ones if the inputs match. We then compare using "AND", which outputs a one only if all values are ones. This limits the expression to just one combination of the inputs.

x1 and not x2 and not x3 and x4

The values x1 and x2 are input "A", and the values x3 and x4 are input "B". The output will only be one if input "A" is 10 and input "B" is 01.

We take our comparison, which is one only when A=10 and B=01, and insert it into our output in places one and two, forming output 011 = 3. To do this, we write the comparison twice, shifting left by one ("<< 1") for the second position and by zero ("<< 0") for the first position; we then write this as

"(x1 and not x2 and not x3 and x4) << 1 = 10"

and add

"(x1 and not x2 and not x3 and x4) << 0 = 1".

To add the digits from the two expressions together, we have the English word "OR", which is only false if all comparisons are false; if one comparison in the sentence, or whole sum, is true, the output is one. In our output, we use "OR" = "|" to combine the outputs under the set condition.

This creates a single line that outputs the values we want for a specific set of inputs, organized into expressions of "NOT", "AND", and "OR" in a three-step language.

The more output digits that are ones for a given input combination, the more times we have to write the same comparison, shift the one to the position of each output digit, and "OR" them together in one line.

You create a new line of code (expression) for each row of the table you are analyzing, from its inputs to its outputs. You then combine all the separate lines with "OR". This creates the fully expanded (maximal) version of our data or program logic.

This is the sum-of-products expression for our code/program, real-world data, or mathematical arithmetic logic.
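As a hedged sketch of the procedure just described (the function name sumOfProducts and the table format are my own, not the note's), the following JavaScript writes out the sum-of-products text for a joined truth table:

//Build the sum-of-products expression for a table of [ input, output ] rows.
//inBits and outBits give the widths of the joined inputs and outputs.
function sumOfProducts( table, inBits, outBits )
{
  var lines = [];
  for( var r = 0; r < table.length; r++ )
  {
    var input = table[ r ][ 0 ], output = table[ r ][ 1 ], terms = [];
    //Flip each zero input digit with "not" so the "AND" is true only for this row.
    for( var i = 0; i < inBits; i++ )
    {
      var bit = ( input >> ( inBits - 1 - i ) ) & 1;
      terms.push( ( bit ? "" : "not " ) + "x" + ( i + 1 ) );
    }
    var cmp = "(" + terms.join( " and " ) + ")";
    //Write the comparison once per set output digit, shifted to that digit's position.
    var parts = [];
    for( var o = outBits - 1; o >= 0; o-- )
    {
      if( ( output >> o ) & 1 ) { parts.push( cmp + " << " + o ); }
    }
    lines.push( parts.join( " | " ) );
  }
  //"OR" all rows together into one expression.
  return( lines.join( " | " ) );
}
//The joined two-row example above: inputs 1001 give outputs 1101; inputs 0110 give 0111.
console.log( sumOfProducts( [ [ 0b1001, 0b1101 ], [ 0b0110, 0b0111 ] ], 4, 4 ) );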

This allows us to align anything we like to an absolute answer by eliminating all matching comparisons between inputs, thereby creating a combined output. You can see this done under the Log2 Int32 V3 function.

What is nice about this is that it is easy to understand and allows us to find the smallest calculations or math theorems for everything, from the smallest code for an application or program, to even the most complex ones. Additionally, it can be entirely automated with artificial intelligence, making it easier to solve extremely complex problems.

This allows us to find the smallest expression for data or any program or code, but we can still further reduce the size of our analyzed data, functions, or program code running on the computer.

It also enables us to determine the most effective way to write conversion algorithms or generation algorithms.

Reducing into code and arithmetic

The last step in our analysis is to find the combinations that form XOR and substitute them into the expression or code to shorten it. XOR means "NOT A AND B" OR "A AND NOT B": one or the other, but not both, is true. We call XOR a derived function, as it is a combination of NOT, OR, and AND.

This is because the sum of products for addition reduces to an XOR comparison: 0+1=1 and 1+0=1, which is 0 xor 1 = 1 and 1 xor 0 = 1, but 1 xor 1 is 0. When we add 1+1=10 in binary, 1 AND 1 = 1 produces the carry into the next place value. When we take out all matching comparisons from addition's sum of products, we end up with the pattern described here. We often call "XOR" the sum for the columns and "AND" the carry.
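Below is a minimal half-adder sketch of this sum/carry split; the Adc8 function in the script at the end of this note carries the same idea across eight bits.

//XOR gives the column sum; AND gives the carry into the next place value.
function halfAdd( a, b ) { return( { sum: a ^ b, carry: a & b } ); }
console.log( halfAdd( 1, 1 ) ); //{ sum: 0, carry: 1 } -- the binary 1 + 1 = 10.
console.log( halfAdd( 1, 0 ) ); //{ sum: 1, carry: 0 }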

We then replace all (AND << 1)/XOR expressions with the addition operation in the code, or use an addition logic circuit. We replace all combinations of "NOT(A) addition B" with the subtraction operation in the code, or use a subtraction logic circuit. We replace all repeated additions that shift to the left by one with multiply, and all repeated subtractions that shift to the right with divide.

Lastly, the difference from zero creates less than, greater than, and equal to. Any remaining XOR operations are equivalent to comparisons in programming code.

Relatedly, all arithmetic functions are derivable from one another geometrically. There is a script at the end of this note that uses only "AND", "OR", and "NOT" to build every math operation, and even comparison, by implementing each derived arithmetic operation in combination with the next one by function.

It is essential to note that the arithmetic logic for multiplying, dividing, adding, and subtracting does not change when we add a decimal point, as it is only a placeholder in our value. For example, adding four twenty-five times gives 100, and adding 0.04 twenty-five times gives 1.00, treating the decimal point as an imaginary placeholder. The same applies to binary values.
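Here is a small illustrative sketch of the point being only a placeholder (the fixed-point framing is mine, not the note's):

//Store 0.04 as the integer 4 with an implied scale of two digits behind the point.
//The addition logic never changes; only where we imagine the point sits does.
var scale = 2, v = 0;
for( var i = 0; i < 25; i++ ) { v += 4; } //The same integer add as 4 * 25.
console.log( v );                         //100
console.log( v / Math.pow( 10, scale ) ); //1 -- the same digits with the point shifted.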

Practical applications

You can design a simple function that translates truth tables or data into the smallest math function possible. You can even write a program that tests your program's logic and code and writes the smallest version of it. Optimizing compilers already do an excellent job of this for the software developer; the resulting compiled instructions usually do not match what the developer wrote for the program's logic and math because of auto-optimization. However, auto-optimizing gigabyte-sized projects takes forever to compile with full optimization, so compilers resort to shortcuts, breaking your code into smaller sections that can each be auto-optimized.

This means optimizing your program's code and logic using the sum-of-products method, and further refining by hand what needs to be fast, is still essential.

I also found it fun to break down "Math.sin/inverse" and "Math.cos/inverse" this way. They change into a pattern much like the division and multiplication functions, so we can match our defined division and multiplication functions to the pattern and calculate sine and cosine faster as a series. Generally, we do not need to carry the series very far to be accurate, and we don't need a big logic circuit for it, because we are limited to the accuracy of a 64-bit floating-point number; higher accuracy is optional and generally not useful. The concept of floating-point numbers was to attach a value to our binary numbers that specifies the position of the placeholder; the arithmetic logic for addition, subtraction, multiplication, and division remains the same universally.
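As a hedged illustration of the series idea (this uses the standard Taylor series for sine, which may differ from the exact pattern the note derives), sine can be computed from only add, subtract, multiply, and divide:

//sin( x ) = x - x^3/3! + x^5/5! - ... Each term derives from the last using only
//multiply and divide, and a handful of terms is enough at 64-bit float accuracy.
function sine( x, terms )
{
  var sum = 0, term = x;
  for( var n = 1; n <= ( terms || 10 ); n++ )
  {
    sum += term;
    term *= -x * x / ( ( 2 * n ) * ( 2 * n + 1 ) );
  }
  return( sum );
}
console.log( sine( Math.PI / 6 ), Math.sin( Math.PI / 6 ) ); //Both are about 0.5.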

Working this way, you can design all math functions and custom functions like an artist, where nothing is impossible. You can also define any function about anything in the world around you using real-world data and even derive all the math theorems we have discovered.

If you take the time to map what you want to solve, anything is possible. There are no limits to what you can solve or build the answer to. You also shouldn't need to rebuild the book of physics or calculus by analyzing real-world data back into what it really is.

You should be able to visualize your code and methods without resorting to automated breakdowns of your program's logic functions or auto-solving real-world data for everything in your reality.

However, it can be fun to reconstruct everything based on what we analyze in the real-world from real-world data. It can also give us a much deeper understanding when visualizing patterns/code or looking at a math formula. It also gives us a much deeper understanding of the world around us.

You can also use it to solve problems that originally had no answers in a quantum-solved way and automate the entire task. The difference is that quantum computers can use entanglement to align the answer across qubits, using measurement to give us our answer instantly.

Quantum breakdown of data

In quantum computing, we have qubits that can be "up" or "down" as a one or a zero, or be both one and zero in a superposition state. We can do the same things with qubits that we did with "AND", "OR", and "NOT", except much faster: the qubits align to the answer using entanglement and measurement, rather than us comparing everything with "AND" and "NOT", "OR"-ing everything together, and crossing out matches to outputs to align to our answer. This addresses problems in place values of multiples of two that require us to compare everything to find the result, known as O(n), varying-time operations.

This also means programming a quantum computer to do math along qubits is done differently, to align to an answer, and some of the simpler calculations are easier to do on a digital system than on a quantum system. Aligning things to an absolute answer and analyzing data is faster on a quantum computer.

A digital system can do such alignment and solve to an absolute answer using the sum of products (SOP) or, in reverse, the product of sums (POS), refactoring the code or data we are analyzing into the smallest function for our real-world data or code. Both SOP and POS give the same smallest function or code; POS is simply faster when there are more ones than zeros, as we end up with fewer comparisons to reduce to reach the same smallest code or function.

We also use these methods to determine how to write complex algorithms or code in the most efficient way possible. Often, finding shortcuts allows us to avoid calculations and comparisons and reuse as much as possible when combining our outputs.

Faster data analysis

Generally, the SOP and POS methods of analyzing data, and of reducing code or large data from binary back into arithmetic math functions, comparisons, code, and algorithms, are the most abstract and straightforward we can go for data analysis. They are the brute-force methods for breaking everything down and combining it back together into functions, code, or circuits that build dedicated hardware.

Generally, we can subtract each prior number from the next to find the differences between the following numbers, which is much faster than a sum of products at the most abstract representation of our data or code.

We can combine the differences into expressions of our data as dimensional geometric equations, as there is a pattern that runs through all arithmetic combinations. This method generally gives us the smallest answer to the data differentially, making such alignments faster on a digital system using the defined math functions addition, subtraction, multiplication, and division.
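A hedged sketch of the difference method (the helper name differences is mine): repeatedly subtracting neighbours reveals the pattern behind a sequence, and for data from an n-th degree polynomial the n-th differences become constant, which is far cheaper than expanding a full sum of products.

//Subtract each prior number from the next to find the differences.
function differences( data )
{
  var out = [];
  for( var i = 1; i < data.length; i++ ) { out.push( data[ i ] - data[ i - 1 ] ); }
  return( out );
}
var squares = [ 0, 1, 4, 9, 16, 25 ];
console.log( differences( squares ) );                //[ 1, 3, 5, 7, 9 ]
console.log( differences( differences( squares ) ) ); //[ 2, 2, 2, 2 ] -- constant.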

Digital computers generally leverage this geometric structure in digital logic gates to make smaller combinational arithmetic logic units that compute all math operations in a more compact package per slice of positional base-two binary numbers.

If you want a faster, more modern automated method that solves all things matrixed geometrically without using the sum of products, you can use my Q-AI matrix: Link. It also has good documentation and shows the structure of all things and how it builds geometrically, in as simple an explanation as possible.

You may also find it interesting how we can pick a single number apart into what it is; see the FL64 Number analysis library.

Scientific perspective

Scientifically, we know everything emerges from something simpler or more abstract that creates complex systems between interacting parts; this is called emergence.

You might also like the Game of Life simulation; it is a great example. See Let's Build a Computer in the Game of Life. In this video, you learn that "AND, OR, NOT" is a functionally complete set of gates and what a Turing-complete system is.

You may also be interested in learning about configuration spaces; see Traversing Configuration Space.

Lastly, you may enjoy the following: In which 2swap Solved Klotski.

We know that we can solve anything by mapping it this way, using the stat solve in the sum-of-products truth tables, or by using the differential method employed in my Q-AI matrix tool. What makes quantum computing so special is that it can solve all possibilities linearly into an answer much faster. See how quantum computing will work.

We can take emergence to its most extreme and think about the zero dimension; see the zero dimension explained.

Energy is neither created nor destroyed. It just is and sums to zero. The object that contains it all is called a singularity.

If we take a circle and split it into two halves, one half representing the positive and the other the negative, the sum of the two halves equals zero. This is the initial fluctuation that begins anything at all, called a quantum fluctuation, still summing to zero. It is also measurable using the Casimir effect and is the frontier of zero-point energy, an infinite and renewable form of energy research; see What if we harnessed zero-point energy. The following video delves even deeper into the details, and it's also worth noting that this is a collective endeavour; see Unlocking the quantum vacuum. I recommend watching it from the start.

Both white holes and black holes are opposites of each other and begin at the singularity.

We can think of these two extreme points as mirroring binary digits 1 and 0 from the whole (the singularity) at the most abstract level. However, within this singularity exists every possibility.

This zero, encapsulating 1 and 0, underlies the unfolding of reality from fluctuations at a singular point, embodying the wave function of the zero-point field that encompasses all known reality. Energy is neither created nor destroyed and equally sums to zero. All phenomena emerge from this void, akin to nothingness as infinite potential.

We can select paths choosing between 0=L and 1=R, much like the branching of a tree. As we navigate selecting branches, we can choose a new L=0 or R=1 that is added in continuation, while all additions contribute to zero in balance, creating perfect symmetry and expansion. We can generate all the information that comprises our current reality in a simulation spanning from a whole that contains it all, or create any song or video.

Our reality is a small place within this infinite expansion. Determining the exact path to our particular reality poses a highly complex problem. We use the microwave background of the Big Bang to map our particular universe. We have a fairly good model, but sometimes, we have to change the model to match what we did not know could exist based on observation of our emergent universe. Sending the James Webb telescope into space enables us to analyze a greater portion of our universe in greater detail, providing a more in-depth timeline of our emerging universe and everything within it.

Mind Benders explains this in detail in a concept called the Library of Babel, which turns out to be the most accurate description of what reality is and how it emerges at the smallest scale, revealing that everything is contained within a small section of what we call 'here and now'.

You may also like the following: Universe is a single whole.

Everything remains connected to a single fabric or whole, and the states of all units depend on one another and interact and create emergent properties from this whole in cause-effect relationships.

The separation between objects and particles may not exist either, and all things are equal to zero as a single indivisible unit.

Everything spans from zero in quantum fluctuation and remains a single indivisible unit. It begins at a singularity, which spans to what you call the Big Bang. The singularity existed before the Big Bang. There may be many more universes than just ours.

Alan Watts combines both scientific understanding and our existing knowledge in a simplified explanation.

Realize that everything connects to everything else (Leonardo da Vinci, 1452-1519).

Nothing and Something go together as a whole - Alan Watts.

One Suchness - Alan Watts.

Because we can build functions that calculate anything to precisely what it is, we also have a problem with data security and encryption.

When we encrypt information or data, we often transform the data using a hidden value or key that runs through a process on the data.

We write a truth table and assign the numbers 0, 1, 2, 3, 4, 5, 6... to each output of the data. We then reduce this to the smallest expression of the data from its sum of products, which reveals what is hidden under the process that changed the data, regardless of the key value or process applied. We can use this method to find the possible data, the encryption method used, and the possible prime. Since we can factor primes on a quantum machine and use quantum measurement over qubits much faster, we can break such encryption faster than it takes a digital system to arrive at a possible version of the data. This is called Shor's algorithm.

While we can simplify any set of outputs into the most basic function, this poses a significant security risk. No matter how hard we try to conceal information, it can be broken down and pattern-matched and even sequentially matched. This underscores the importance of vigilance in data security.

Anything can be reduced to just mapping and combining outputs in tables, or by traversing multi-dimensional configuration space.

The problem is that any process of encrypting information is irrelevant, no matter how we change the information around, when using a combination of both methods.

Does randomness exist?

Using quantum computing, we can allow outside interactions from the environment to affect the qubits to get a truly random result.

It is not that the environment around us, or the noise or interference, is completely random. Even at the quantum level, noise is not entirely unpredictable, but it is complicated to correct. Our brains are hardwired to perceive randomness, as we cannot see everything that sums or combines to form a result. Therefore, we perceive many things as random that are actually a complex series of effects coming together to produce a result, which can be combined and mapped into arithmetic calculations or code simulations.

At tiny scales, we observe randomness due to quantum interactions that occur across vast distances between entangled particles. Knowing the entire picture, including the state of all particles and which particles are entangled, is not possible. We would need a very powerful system that could simulate reality itself in terms of cause and effect to understand all quantum interactions, so we only use probabilities. Once we measure the state of the qubit, the probability becomes certain.

Another example of a truly random sample of data is as follows. We use time in the microsecond range on digital systems to create truly random results: we time a small piece of code twice and divide the larger number by the smaller, producing a random result. The result is thought to be genuinely random, as we do not know the state or temperature of the system running the code. In both cases, the results are probabilistic and considered genuinely random, yet they do not violate the law of cause and effect. The reason they are random is that the number of outside interactions that create the result is too vast to know.
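Here is a hedged sketch of the timing idea above, assuming a JavaScript environment where performance.now() is available (a browser, or modern Node); it illustrates the concept and is not a secure random generator:

//Time a small piece of code twice; the timings depend on system state we cannot know.
function timeOnce()
{
  var t0 = performance.now(), s = 0;
  for( var i = 0; i < 1000000; i++ ) { s += i; } //The small code being timed.
  return( performance.now() - t0 );
}
var a = timeOnce(), b = timeOnce();
var r = Math.max( a, b ) / Math.min( a, b ); //Divide the larger time by the smaller.
console.log( r % 1 ); //The fractional part varies unpredictably from run to run.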

Random-based and chaotic systems naturally self-organize into fractal patterns, and all possibilities eventually happen evenly over time. We have generally used fractals to correct random noise or predict data, even on quantum computers in the design of error-correction codes, because of quantum decoherence from the environment.

Rather than simulating everything, we calculate the number of possible changes that can occur from the outside world to the object as a probability that can affect our predicted result. Once we know the exact result of a large series of causes and effects, the number of probabilities decreases, and the percentage of where we know the answer increases. This same process applies to predicting the weather, as more information becomes known and the number of probabilities decreases.

A more physical and practical example is that there are six sides to a die, so we have six probabilities that it will likely land evenly on any side. The side the die lands on is a complex series of causes and effects resulting from all the surfaces it will hit, including the speed at which it moves, air pressure, and its initial position just before it is thrown. We can even take it a step further and include your thought pattern in cause and effect when you decide to throw the die, as even your thoughts and actions have cause-and-effect relationships. Again, this requires too much information to be known to predict the outcome before it happens. Therefore, we use a one-in-six probability and claim it is random, even though we know everything is a cause and effect of something else set in motion, even our thoughts.

It is challenging to prove the existence of randomness scientifically, as probabilities are used in situations where the number of interactions in cause-and-effect relationships is too large to compute, or we can not know the state of everything beforehand, making it essentially impossible to determine the outcome.

Building an uncertain outcome

Using real numbers from the real world is not enough, so we have to measure numbers and compare them to ensure they do not relate to anything. This is done to ensure that even a quantum computer cannot align the values into an answer, that we cannot align the values into an answer with "AND", "NOT", and "OR", and that a quantized matrix cannot either. We cannot prove that randomness is real in the natural world, as what we measure always breaks down. At very small scales, we encounter a degree of randomness due to quantum interactions that occur across vast distances between entangled particles, resulting in quantum decoherence. I do not see us building a system to simulate all of reality in terms of cause and effect at these tiny scales, as it would only slightly improve particle simulations in terms of how particles come to be from the smaller scale.

We can utilize the quantum effect to generate random values if we allow quantum decoherence to influence our qubits from the outside world. We can measure numbers from the real world and check whether they reverse to a function; if they do, we measure again until we obtain a value that does not reduce to the same code or function. This makes the next value uncertain and ensures that the values are unrelated, or belong to a much larger set or pattern that is bigger than the set itself. As the set gets bigger, it takes longer to sequence all the values in a truth table or even in a quantized matrix; but on a quantum computer, we can align any set in linear time into what it is, allowing us to build truly random generators that follow this rule, where the next value changes the type of function the data reduces into.

I do not see how we can have two systems match up using this form of encryption, so for the time being, as modern technology further develops, there is nothing that can really be hidden or kept secret anymore. This has both pros and cons. It has more pros for the general public than cons.

//Create Xor. Note Xor is a derived gate function.
function Xor( n1, n2 ) { return( ( ~n1 & n2 ) | ( n1 & ~n2 ) ); }
//The variable C will store the carry after each add.
var c = 0;
/*Addition translates to a basic XOR from its maximally expanded form using a truth table of all inputs to outputs.
If the inputs are not 0 + 0 or 1 + 1, they add to a sum of 1 (XOR).
The only case XOR excludes is both inputs being 1 + 1, which must carry.
Carry is added the same way, using XOR with the number per digit.
When we remove all matching comparisons from addition's truth table and reduce the expansion to the smallest
function, we end up with this function and pattern.*/
function Adc8( n1, n2 )
{
//The first line computes "XOR" as "s1" (sum total), and "AND" as "s2" (carries).
s1 = Xor( n1, n2 ); s2 = n1 & n2;
//These lines add carry per digit using xor, and shift to "s1".
o1 = Xor( s1, c ) & 1; c = ( ( s2 & 1 ) | ( s1 & c ) ) << 1;
o2 = Xor( s1, c ) & 2; c = ( ( s2 & 2 ) | ( s1 & c ) ) << 1;
o3 = Xor( s1, c ) & 4; c = ( ( s2 & 4 ) | ( s1 & c ) ) << 1;
o4 = Xor( s1, c ) & 8; c = ( ( s2 & 8 ) | ( s1 & c ) ) << 1;
o5 = Xor( s1, c ) & 16; c = ( ( s2 & 16 ) | ( s1 & c ) ) << 1;
o6 = Xor( s1, c ) & 32; c = ( ( s2 & 32 ) | ( s1 & c ) ) << 1;
o7 = Xor( s1, c ) & 64; c = ( ( s2 & 64 ) | ( s1 & c ) ) << 1;
o8 = Xor( s1, c ) & 128; c = ( ( s2 & 128 ) | ( s1 & c ) ) << 1;
//We move carry back, allowing this operation to be called recursively.
c >>= 8;
//We put the result together.
return( o1 | o2 | o3 | o4 | o5 | o6 | o7 | o8 );
}
//We set carry 0 to do a regular 8 bit add.
function Add8( n1, n2 ) { c = 0; return( Adc8( n1, n2 ) ); }
//We set carry 0 to do a regular 8 bit add. Then do another add with the continued carry.
function Add16( n1, n2 ) { c = 0; return( Adc8( n1, n2 ) | ( Adc8( n1 >> 8, n2 >> 8 ) << 8 ) ); }
//Do a full 32 bit add.
function Add( n1, n2 ) { c = 0; return( Add16( n1, n2 ) | ( Adc8( n1 >> 16, n2 >> 16 ) << 16 ) | ( Adc8( n1 >> 24, n2 >> 24 ) << 24 ) ); }
//Do a full 32 bit subtract. To subtract, we invert input B to our add. We also set carry to 1, as it is inverted to act as borrow.
function Sub( n1, n2 ) { c = 1; n2 = ~n2; return( Add16( n1, n2 ) | ( Adc8( n1 >> 16, n2 >> 16 ) << 16 ) | ( Adc8( n1 >> 24, n2 >> 24 ) << 24 ) ); }
//15 bit multiply. Multiply using multiples of 2 using left shift.
function Mul15( n1, n2 )
{
o = ( ( n2 >> 0 ) & 1 ) && ( n1 << 0 );
o = Add( o, ( ( n2 >> 1 ) & 1 ) && ( n1 << 1 ) );
o = Add( o, ( ( n2 >> 2 ) & 1 ) && ( n1 << 2 ) );
o = Add( o, ( ( n2 >> 3 ) & 1 ) && ( n1 << 3 ) );
o = Add( o, ( ( n2 >> 4 ) & 1 ) && ( n1 << 4 ) );
o = Add( o, ( ( n2 >> 5 ) & 1 ) && ( n1 << 5 ) );
o = Add( o, ( ( n2 >> 6 ) & 1 ) && ( n1 << 6 ) );
o = Add( o, ( ( n2 >> 7 ) & 1 ) && ( n1 << 7 ) );
o = Add( o, ( ( n2 >> 8 ) & 1 ) && ( n1 << 8 ) );
o = Add( o, ( ( n2 >> 9 ) & 1 ) && ( n1 << 9 ) );
o = Add( o, ( ( n2 >> 10 ) & 1 ) && ( n1 << 10 ) );
o = Add( o, ( ( n2 >> 11 ) & 1 ) && ( n1 << 11 ) );
o = Add( o, ( ( n2 >> 12 ) & 1 ) && ( n1 << 12 ) );
o = Add( o, ( ( n2 >> 13 ) & 1 ) && ( n1 << 13 ) );
o = Add( o, ( ( n2 >> 14 ) & 1 ) && ( n1 << 14 ) );
return( o );
}
//Do a 15 bit divide. Subtract using multiples of two with left shifts.
//The output sets each bit where the shifted subtraction succeeds, with carry acting as borrow.
//Also "n1" can be returned as the division remainder.
function Div15( n1, n2 )
{
//Keep the subtracted value only when the subtraction succeeded without borrow (c is 1).
t = Sub( n1, n2 << 14 ); n1 = c ? t : n1; o = c << 14;
t = Sub( n1, n2 << 13 ); n1 = c ? t : n1; o |= c << 13;
t = Sub( n1, n2 << 12 ); n1 = c ? t : n1; o |= c << 12;
t = Sub( n1, n2 << 11 ); n1 = c ? t : n1; o |= c << 11;
t = Sub( n1, n2 << 10 ); n1 = c ? t : n1; o |= c << 10;
t = Sub( n1, n2 << 9 ); n1 = c ? t : n1; o |= c << 9;
t = Sub( n1, n2 << 8 ); n1 = c ? t : n1; o |= c << 8;
t = Sub( n1, n2 << 7 ); n1 = c ? t : n1; o |= c << 7;
t = Sub( n1, n2 << 6 ); n1 = c ? t : n1; o |= c << 6;
t = Sub( n1, n2 << 5 ); n1 = c ? t : n1; o |= c << 5;
t = Sub( n1, n2 << 4 ); n1 = c ? t : n1; o |= c << 4;
t = Sub( n1, n2 << 3 ); n1 = c ? t : n1; o |= c << 3;
t = Sub( n1, n2 << 2 ); n1 = c ? t : n1; o |= c << 2;
t = Sub( n1, n2 << 1 ); n1 = c ? t : n1; o |= c << 1;
t = Sub( n1, n2 << 0 ); n1 = c ? t : n1; o |= c << 0;
return( o );
}
//Compare if n1 is less than n2.
function lessThan( n1, n2 ) { Sub( n1, n2 ); return( !c ); }
//Compare if n1 is greater than n2.
function greaterThan( n1, n2 ) { Sub( n1, n2 ); return( c ); }
//Compare if two values are the same.
function equalTo( n1, n2 ) { Sub( 0, Sub( n1, n2 ) ); return( c ); }
/*Although I explained the math operations in this script using explanations, you can also create the same operations from
a truth table of all inputs and outputs. I showed how to do this using the ultra-fast log2 function; see
https://gist.github.com/Recoskie/980acb54e9c0a1b6bf7614d47ebc0c46#file-bitwise-log2-js
It's important to remember that all math operations, no matter how complex they may seem, are simply combinations of the
basic logic gates: "AND", "OR", and "NOT". We compare our inputs using binary opposites by doubling our input values and
flipping the other using "NOT". We compare the input combinations with "AND" and move the place values of our digits to
match the results. We use "OR" to combine it together.
We then remove all matching comparisons between inputs and outputs in the digit placements from inputs to outputs.
This leaves the absolute minimum that must happen to get the results from our inputs and reveals the pattern of our operation or data.
We then relate our defined math arithmetic functions to matching combinations in the function, further reducing to the
smallest calculation. This is the brute-force method for breaking down information/data or finding the smallest function.*/