Why binary and not ternary computing?

Submitted by 对着背影说爱祢 on 2019-11-27 17:47:16
starblue
  • It is much harder to build components that use more than two states/levels/whatever. For example, the transistors used in logic are either closed and don't conduct at all, or wide open. Having them half open would require much more precision and use extra power. Nevertheless, sometimes more states are used for packing more data, but rarely (e.g. modern NAND flash memory, modulation in modems).

  • If you use more than two states you need to be compatible with binary, because the rest of the world uses it. Three is out because the conversion to binary would require expensive multiplication or division with remainder. Instead you go directly to four or a higher power of two.

These are practical reasons why it is not done, but mathematically it is perfectly possible to build a computer on ternary logic.
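The conversion point above can be sketched in Python (an illustrative toy, not how hardware does it): regrouping base-4 digits into bits is pure bit-plumbing, while base 3 forces genuine multiply-and-divide arithmetic over the whole number.

```python
def base4_to_bits(digits):
    """Each base-4 digit maps to exactly two bits -- no arithmetic needed."""
    bits = []
    for d in digits:                  # digits most-significant first, each 0..3
        bits += [(d >> 1) & 1, d & 1]
    return bits

def base3_to_int(digits):
    """Base 3 needs a genuine multiply-accumulate over the whole number."""
    value = 0
    for d in digits:
        value = value * 3 + d         # one multiplication per digit
    return value

print(base4_to_bits([3, 1]))   # [1, 1, 0, 1], i.e. 0b1101 == 13
print(base3_to_int([1, 1, 1])) # 9 + 3 + 1 == 13
```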

screwballl

Lots of misinformation here. Binary has a simple on/off switch. Ternary (also called trinary) comes in one of two flavors: balanced, with digits -1, 0, +1, or unbalanced, with digits 0, 1, 2. Either way it is not simply on or off; more precisely, it has two "on" states.

With the spread of fiber optics and cheaper hardware, ternary could actually take us to a much faster state for a much lower cost. Modern code could still be used (much as 32-bit software still runs on 64-bit hardware) in combination with newer ternary code, at least initially. The early hardware would just need to check which kind of data is coming through, or the software would announce ahead of time whether it is sending bits or trits. Data could then be sent three values at a time instead of the current two, for the same or less power.

With fiber-optic hardware, instead of the modern on/off binary process, the state would be determined by 0 = off, with the other two states being orthogonal polarizations of light. As for security, this could actually be made massively more secure for the individual, as each PC or even each user could be set to specific polarization "specs" used only between that user and the destination. The same would go for the gates in other hardware: they would not need to be bigger, just have the option of 3 possibilities instead of 2.

There have even been some theories, and possibly some early tests, based on the Josephson effect, which would allow for ternary memory cells using circulating superconducting currents: clockwise, counterclockwise, or off.

When compared directly, ternary is the integer base with the highest radix economy, followed closely by binary and quaternary. Even some modern systems use a type of ternary logic, e.g. SQL, which implements ternary logic as a means of handling NULL field content. SQL uses NULL to represent missing data in a database. If a field contains no defined value, SQL assumes this means that an actual value exists but is not currently recorded in the database. Note that a missing value is not the same as either a numeric value of zero or a string value of zero length. Comparing anything to NULL, even another NULL, results in an UNKNOWN truth state. For example, the SQL expression "City = 'Paris'" resolves to FALSE for a record with "Chicago" in the City field, but it resolves to UNKNOWN for a record with a NULL City field. In other words, to SQL an undefined field represents potentially any possible value: a missing city might or might not be Paris. This is where ternary logic is used within modern binary systems, albeit crudely.
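The radix-economy claim is easy to check numerically. A minimal sketch, using the standard definition E(b, N) = b · log_b(N), i.e. the number of digits needed times the number of symbol values per digit:

```python
import math

# Radix economy E(b, N) = b * log_b(N). Lower is better; among integer
# bases, 3 wins, with bases 2 and 4 exactly tied just behind it.
def radix_economy(base, n):
    return base * math.log(n) / math.log(base)

N = 10**6
for b in (2, 3, 4, 10):
    print(b, round(radix_economy(b, N), 2))  # base 3 prints the smallest value
```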

Of course we'd be able to hold more data per digit, just like our decimal number system can hold far more data in a single digit.

But that also increases complexity. Binary behaves very nicely in many cases, making it remarkably simple to manipulate. The logic for a binary adder is far simpler than one for ternary numbers (or for that matter, decimal ones).
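To make that comparison concrete, here is a one-bit binary full adder expressed with nothing but XOR, AND and OR, the exact gates hardware uses (a sketch in Python rather than HDL):

```python
# A binary full adder needs only three gate types.
def full_adder(a, b, cin):
    s = a ^ b ^ cin                   # sum bit: two XOR gates
    cout = (a & b) | (cin & (a ^ b))  # carry out: two ANDs and one OR
    return s, cout

# A balanced-ternary adder has no equally tidy gate-level form; even the
# single-digit sum table has 9 entries instead of 4.
print(full_adder(1, 1, 0))  # (0, 1): 1 + 1 = 0 carry 1
```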

You wouldn't magically be able to store or process more information. The hardware would have to be so much bigger and more complex that it'd more than offset the larger capacity.

A lot of it has to do with the fact that ultimately, bits are represented as electrical impulses, and it's easier to build hardware that simply differentiates between "charged" and "no charge", and to easily detect transitions between states. A system utilizing three states has to be a bit more exact in differentiating between "charged", "partly charged", and "no charge". Besides that, the "charged" state is not constant in electronics: the energy starts to "bleed" eventually, so a "charged" state varies in actual "level" of energy. In a 3-state system, this would have to be taken into account, too.

Well, for one thing, there is no smaller unit of information than a bit. Operating on bits is the most basic and fundamental way of treating information.

Perhaps a stronger reason is that it's much easier to make electrical components that have two stable states rather than three.

Aside: your math is a bit off. There are approximately 101.4 binary digits in a 64-digit ternary number. Explanation: the largest 64-digit ternary number is 3433683820292512484657849089280 (3^64 - 1). To represent this in binary requires 102 bits: 101011010101101101010010101111100011110111100100110010001001111000110001111001011111101011110100000000

This is easy to understand: log2(3^64) is about 101.4376.
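That figure is quick to verify:

```python
import math

n = 3**64 - 1             # largest 64-digit ternary number
print(n.bit_length())     # 102 bits needed
print(64 * math.log2(3))  # 101.4376...
```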

There are also theories that suggest fiber optics could use light frequencies (i.e. colors) to differentiate states, thereby allowing a near-infinite number of base possibilities (depending on the resolution of the detection unit).

Logic gates are definitely feasible for any base, but let's use ternary for an example:

For a ternary XOR gate, it could be exclusive to one (or any) of the three states it is comparing, OR one of the other states. It could also tie two of the three states together for a binary output. The possibilities increase literally exponentially. Of course, this would require more complex hardware and software, but the complexity should decrease the size and, more importantly, the power (read: heat). There is even talk of using ternary in a nano-computing system where a microscopic "bump", a "hole", or "unchanged" represents the three states.

Right now, we are in sort of a QWERTY-type problem. QWERTY was designed to be inefficient because of a problem with typing mechanics that no longer exists, but everyone who uses keyboards today learned the QWERTY system and no one wants to change it. Ternary and higher bases will someday break through this issue when we reach the physical limitations of binary computing. Maybe not for another twenty years, but we all know that we cannot continue doubling our capability every year and a half forever.

The ternary equivalent of the 'bit' just caused too much outrage!

screwballl's reply is correct and corrects some of the misstatements offered here. Those who replied about fractional positive values completely missed the concept of the ternary system, which is based on 0, +1 and -1. When ternary machines were first constructed by the Russians in the 1950s, the competition between the USSR and the USA was intense. I suspect that politics between the two had a lot to do with the eventual popularity of the USA's binary over the USSR's ternary.

From what I've read, there are some ternary computers in use. Moscow has some in use at their university and IBM has some in its labs. There are references to others, but I couldn't distinguish how serious they are, or if they are just for experimentation or play. Apparently they are much less costly to build and they use far less energy to operate.

Another major hurdle is that there is a much larger number of logic operations that would need to be defined. The number of operators is given by the formula b^(b^i), where b is the base and i is the number of inputs. For a two-input binary system this works out to 16 possible operators. Not all of these are usually implemented as gates, and some gates cover more than one condition, but all of them can be implemented with three or fewer of the standard gates. For a two-input ternary system this number is much higher: 19,683. While several of these gates would be similar to one another, the ability to design basic circuits manually would ultimately be almost impossible, whereas even a freshman engineering student can design basic binary circuits in their head.
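The b^(b^i) count is straightforward to confirm:

```python
# Number of distinct i-input, base-b logic functions: b ** (b ** i).
def num_operators(base, inputs):
    return base ** (base ** inputs)

print(num_operators(2, 2))  # 16 two-input binary gates (AND, OR, XOR, ...)
print(num_operators(3, 2))  # 19683 two-input ternary gates
```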

I believe it is for two reasons (please correct me if I'm wrong). First, because the values 0 and 1 are not really no-current/current or anything like that. The noise is quite high, and the electronic components must be able to tell that a value fluctuating from, say, 0.0 to 0.4 is a zero, and from 0.7 to 1.2 is a one. If you add more levels, you are basically making this distinction more difficult.

Second: all the boolean logic would immediately cease to make sense. Since you can implement addition out of boolean gates, and from addition every other mathematical operation, it is nicer to have something that maps neatly onto practical use for math. What would the truth table be for an arbitrary pair among false/maybe/true?
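One concrete answer to that question is Kleene's strong three-valued logic, the same scheme SQL borrows for NULL: order the values FALSE < UNKNOWN < TRUE, then AND is min, OR is max, and NOT flips across the middle. A minimal sketch:

```python
# Kleene's strong three-valued logic, with values encoded as 0, 1, 2.
F, U, T = 0, 1, 2
NAMES = {F: "FALSE", U: "UNKNOWN", T: "TRUE"}

def and3(a, b): return min(a, b)   # AND: the weaker operand wins
def or3(a, b):  return max(a, b)   # OR: the stronger operand wins
def not3(a):    return 2 - a       # NOT: flip around UNKNOWN

for a in (F, U, T):
    for b in (F, U, T):
        print(f"{NAMES[a]:7} AND {NAMES[b]:7} = {NAMES[and3(a, b)]}")
```

Note that TRUE AND UNKNOWN yields UNKNOWN, while FALSE AND UNKNOWN yields FALSE, exactly the SQL NULL behavior described in another answer here.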

A lot of it has to do, I am pretty sure, with error checking of digital signals. For example, in quantum computing this task is nearly (but not entirely) impossible to achieve due to the no-cloning theorem, but also due to the fact that there is an increased number of states. For two states the process of error checking is not trivial, but it is relatively easy. For three states error checking becomes much harder. This is also why analogue computers, with a nearly infinite number of states, were ruled out.

If you are interested in quantum computing, though, look into sphere packing and quantum error correction; some pretty neat stuff there.

I think that ternary would be more efficient. It just never became popular. Binary took the stage and now a switch to ternary would be a change of everything we know.

To have a circuit operate in anything but binary, you must define how the other states will be represented. You've proposed a system of -1, 0, and +1, but transistors don't work that way, they like to have their voltage or current going in one direction only. To make a 3-state bit would take 2 transistors, but you could make 2 binary bits out of the same transistors and have 4 states instead of 3. Binary is just more practical at the low level.

If you tried to set thresholds on the circuit and use 0, +1, +2 instead, you run into a different set of problems. I don't know enough to go into details, but for logic circuits it's just more trouble than it's worth, especially when the industry is completely dedicated to binary already.

There is one area where multiple levels are used to get more than 2 states per cell: MLC flash memories. Even there, the number of levels is a power of 2 so that the output can easily be converted to binary for use by the rest of the system.

Sure, but a ternary 'bit' (a trit) would be more complicated; you'd still be storing the same amount of information, just in base 3 instead of base 2, and the power of two-state components is their simplicity. Why not just go ahead and make a 10-state base 10?

Binary computing is related to binary AND, OR and NOT gates, their immense simplicity and ability to be combined into arbitrarily complex structures. They are the cornerstone of literally all the processing your computer does.

If there were a serious case for switching to ternary or decimal, then they would. It isn't a case of 'they tried it like that and it just stuck'.

If we use 3 states, the main problems arising are:

  1. If we use a unipolar signal, the noise margin will shrink, increasing the bit error rate.
  2. For a unipolar signal, to keep the noise margin constant we have to raise the supply voltage, and hence the power consumption will increase.
  3. If we use a bipolar signal, the total swing of the signal will increase, thereby increasing losses.
  4. An extra layer would have to be added to multilayer PCBs to account for the negative swing of bipolar signals.

Hope I am convincing.

I think it has more to do with programmability, conditional statements and the efficient use and functionality of transistors than anything else. It might be obvious that a nested IF is true if there is a current through a circuit, but how would a program know what to do if the solution could be achieved by a thousand different routes? It's interesting in regard to AI, where memory and learning are far more important than brute computational power.
