
Trinary Code

Started by MetaL MilitiA, December 30, 2004, 01:08 PM


Banana fanna fo fanna

This is a silly thread.

R.a.B.B.i.T

Quote from: Banana fanna fo fanna on May 11, 2005, 07:30 PM
This is a silly thread.
Indeed, especially considering I haven't seen any new posts from MetaL in about 4 months

Adron

#77
Quote from: rabbit on May 11, 2005, 02:59 PM
Its advantage is that when converted to an integer its value is greater than that of 'binary'.  That's about all, though.

Well, one could also put it this way:

Assuming a binary gate = 2 transistors and a trinary gate = 6 transistors, then given X transistors you would be able to process X/6 = n trinary bits, or X/2 = 3n binary bits.

When you convert the maximum trinary value to an integer, you get 3**n - 1, and when you convert the maximum binary value to an integer, you get 2**(3n) - 1, or 8**n - 1. This means that when converted to an integer, the binary value will be greater than the trinary... :)
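A quick sketch of that comparison in Python, taking the 2-vs-6 transistor figures above at face value:

# Fixed transistor budget: binary gate = 2 transistors, trinary gate = 6.
TRANSISTORS = 60

trits = TRANSISTORS // 6           # n trinary digits
bits = TRANSISTORS // 2            # 3n binary digits

max_trinary = 3 ** trits - 1       # largest value n trits can represent
max_binary = 2 ** bits - 1         # largest value 3n bits can represent

print(f"{trits} trits -> max value {max_trinary:,}")
print(f"{bits} bits -> max value {max_binary:,}")
# 10 trits -> max value 59,048
# 30 bits -> max value 1,073,741,823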

R.a.B.B.i.T

I was stating the only instance I could think of in which 'trinary' is greater.  Thank you for proving my point :)

MetaL MilitiA

#79
Quote from: MyndFyre on May 03, 2005, 06:14 PM
Quote from: Adron on May 03, 2005, 01:09 PM
Yes, but even if his was actually bigger in binary bits, I'm having trouble with his measure of "more efficient". If how good something is gets measured by the number of possible combinations, a single trinary bit is 50% more efficient than a single binary bit, a trinary byte is 2563% more efficient than a binary byte, and a trinary word would be many times more efficient still. Just like a 1-byte-larger hard drive is 256 times more efficient, since you can store 256 times as many combinations....

But it would take more space to store, thereby negating the added "efficiency."
Not if you find a way to store 3 possible values (not exactly sure how you could do this, but I'm sure there's a way).

Everybody is saying trinary code would be less efficient than binary. Let's take a look at Boolean algebra, which gives a good example of how much of an increase trinary code would have in performance. With two input variables (0, 1; binary code), there are 16 possible functions. With 3 input variables (0, 1, 2; trinary code), there are 256 possible functions. Since there are 16x more possible functions for the same amount of bits (or would it be trits?), you would be able, for example, to handle 256 colors in trinary code just as fast as handling 16 colors. In theory, if you could store 3 possible values as well, you would be able to store 256 possible values in the same amount of space as you could 16 possible values. This would theoretically increase performance by 16x (1600%; and it also should not depend on the workload).

I haven't been able to review all the posts in this thread yet, but I will soon. This post was simply made to explain the possible performance increase.

EDIT:
Quote from: Adron on May 09, 2005, 06:39 AM
Quote from: Topaz on May 09, 2005, 02:01 AM
The whole issue with Trinary code is that you have to change everything, plus you'd need more storage space because of the 50% increase in voltage.

If you were to convert the current amount of space/equipment from binary -> trinary, you would still gain a 25% increase in efficiency, without losing anything in particular.

It's kind of hard to explain =/

The whole issue with Trinary code is that a basic binary gate can be made with two transistors. The transistors can be made very small and inaccurate because they only have to be all-on or all-off. When you build something analog, something where transistors can be other than fully open or fully closed, you need to design them with more precision. This makes them bigger.

You further cannot use the same circuit that a binary gate uses. The basic building block of a binary code computer is an inverting amplifier with "infinite" amplification. That's a "not gate" or "inverter". Inverters are used all over the place in digital circuits to amplify the signal and remove noise.

An inverter consists of one n-mos and one p-mos transistor. Depending on the input signal applied, either the n-mos will be open and the p-mos closed, or the p-mos will be open and the n-mos closed. Between clock signals (i.e. when the input isn't changing), no current will flow.


When getting technical, it's easier to think about it in terms of an analog circuit where your analog ground is at the 2.5V level. This turns binary 1 into +2.5V and binary 0 into -2.5V. The basic inverter, an "infinite"-amplification inverting amplifier, might in practice have a gain of, say, -100. An input of -2.5V will make the inverter want to output +250V, but clipping at the supply voltage causes it to become just +2.5V. An input of -1.5V will generate an output of +150V, which is also clipped to just +2.5V, meaning that a whole volt of noise has just been removed.

If you were to use a trinary signal, there would have to be a middle state where the voltage was right between Vdd and Vss, i.e. 0V on the analog scale. The inverter as described above would not remove noise for that state. If you have 0.1V of noise on that signal, you'll be inputting 0.1V to the inverter, which will want to generate -10V on the output, which will be clipped to -2.5V, which is now a logic low. The value of the bit has been destroyed.

To accurately reproduce and de-noise such bits, you need to design a much more complicated circuit, consisting of many more transistors. I haven't seen a minimal one, so I can't say for sure how big, but perhaps 5-10 transistors could do it if you were clever. Here's an example of an analog amplifier; you can see the circuit diagram on one of the first pages.


In addition to a trinary logic gate being bigger than a binary one, power consumption becomes a big problem. To generate the intermediate voltage, you need to have a bias current flowing through the amplifier at all times. Outputting the mid-level voltage means that both of the output transistors would be conducting current at the same time. This would increase the power consumption of the circuit enormously.


Another problem, connected to the power consumption one, is speed. To limit the power consumption of an analog amplifier, you limit the current flowing through the two output transistors when a mid-level voltage is being output. This limit will also limit the maximum output current of the amplifier.

The binary inverter is designed to drive a lot of current through its transistors. That is not a problem because the two output transistors will never be conducting current at the same time more than momentarily. The more current you can generate from the output of the gate, the faster the input of the next gate can be charged (stray capacitances).

To make the chip survive the heat generated, you'll have to limit the bias current through the output transistors. But then you'll be limiting the output current and the speed at which the input of the next gate can be charged, i.e. increasing the gate delay.


Conclusion: Trinary logic may be possible, and you'll have more information processed in comparison to the number of logic gates used. However, each logic gate will be at least three times as big, and your power consumption will be up perhaps 10-100 times, as will your gate delay.

Maybe you could turn a Pentium IV from a 32-bit binary processor to a 32-bit trinary processor (50-bit binary equivalent) in a 5x5 inch package, with a power consumption of 3 kW and a clock speed of 25 MHz. Would you want to?
Simple answer: an analog amplifier won't work, and I don't know why everybody got stuck on that solution. I think I have a solution in mind; I'll post it soon.
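For reference, here's a rough sketch in Python of why that mid level is so fragile, using the illustrative figures from the quote above (an inverting gain of -100 and the +/-2.5V supply rails):

# Model the "infinite"-gain inverter as gain -100 with hard clipping at the rails.
GAIN = -100.0
VDD, VSS = 2.5, -2.5   # supply rails on the analog scale (analog ground = 0V)

def inverter(v_in):
    """Ideal inverting amplifier, output clipped to the supply rails."""
    return max(VSS, min(VDD, GAIN * v_in))

# A binary 0 (-2.5V), even with a full volt of noise, comes out as a clean logic 1:
print(inverter(-2.5))   #  2.5
print(inverter(-1.5))   #  2.5 -> the volt of noise is gone

# A trinary mid level (0V) with just 0.1V of noise gets slammed to a rail:
print(inverter(0.1))    # -2.5 -> the mid value has been destroyed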

MetaL MilitiA

#80
Note: If double posting is against the rules, tell me, I'd be glad to combine this with my above post.

My solution consists of multiple binary channels simultaneously transmitting bits (it probably hurts reading that). With this solution, we would be forced to skip to quadratic code, as it would be a waste to have 1.5 channels. Instead of only increasing performance by 16 times like trinary code would, quadratic code would increase it by 4096 times (Four input variables create 2**(2x2x2x2) or 2 to the 16 power, or 65,536, where 65,536/16=4096).

Discuss?

MetaL MilitiA

#81
Quote from: Adron on May 11, 2005, 11:44 PM
Quote from: rabbit on May 11, 2005, 02:59 PM
Its advantage is that when converted to an integer its value is greater than that of 'binary'.  That's about all, though.

Well, one could also put it this way:

Assuming a binary gate = 2 transistors and a trinary gate = 6 transistors, then given X transistors you would be able to process X/6 = n trinary bits, or X/2 = 3n binary bits.

When you convert the maximum trinary value to an integer, you get 3**n - 1, and when you convert the maximum binary value to an integer, you get 2**(3n) - 1, or 8**n - 1. This means that when converted to an integer, the binary value will be greater than the trinary... :)
Rabbit is correct: when converted to an integer, its value is greater than that of binary. Your post proves the following: with my idea, a quadratic gate would = 4 transistors, but only being able to fit half as many gates would negate anything gained, which would make quadratic code = binary code in this case. If we could somehow use something that doesn't generate heat, we wouldn't have to use transistors. Yet if you think about it even more, if you have more channels in a system that doesn't generate heat, you would be doubling the size, negating the purpose of it again. The only possible way to gain something from trinary code is to have 3 possible values on one channel, and to have the entire system not generate heat.

Quote from: Adron on December 31, 2004, 05:24 AM
Trinary would be way slower, and would waste a lot more space than binary.
Quote from: Banana fanna fo fanna on February 22, 2005, 07:07 PM
Trinary code is a silly idea, IMO.
Quote from: Arta[vL] on May 09, 2005, 05:23 PM
No, you're missing the point too. Trinary is not better than binary. It is not preferable. It would not be an improvement. It is not a beneficial thing to develop or use.
In conclusion, you're all incorrect. Trinary would be 16x faster and save 16x the space if implemented into a system that does not generate heat and can have three or more values on one data stream. This is the only possibility for trinary code, meaning it's likely impossible (I've also seen many topics about this around the internet, and none of them reach my final conclusion, so I'm thinking about writing a paper on this and publishing it on the internet).

I'd also like to apologize for my behavior and for the ignorance I showed at the beginning of this thread.

Adron

Quote from: MetaL MilitiA on June 15, 2005, 10:48 PM
Note: If double posting is against the rules, tell me, I'd be glad to combine this with my above post.

My solution consists of multiple binary channels simultaneously transmitting bits (it probably hurts reading that). With this solution, we would be forced to skip to quadratic code, as it would be a waste to have 1.5 channels. Instead of only increasing performance by 16 times like trinary code would, quadratic code would increase it by 4096 times (Four input variables create 2**(2x2x2x2) or 2 to the 16 power, or 65,536, where 65,536/16=4096).

Discuss?

Having multiple binary channels simultaneously transmitting bits is still just a binary system. Today's computers use 32 or 64 binary channels simultaneously transmitting bits. Those systems are called 32-bit processors and 64-bit processors. This is nothing new or revolutionary.

Aside from that, your calculations on the efficiency of systems as a function of the number of possible functions are flawed. You talk about storing 256 possible values or storing 16 possible values. 16 possible values correspond to 4 binary bits (2**4 == 16) and 4 trinary bits correspond to 81 possible values (3**4 == 81). Using your quaternary bits, you get 256 possible values though (4**4 == 256).

To have the same number of possible values with binary bits as with quaternary bits, you need 8 binary bits for 4 quaternary bits (2**8 == 4**4 == 256), meaning that the binary bits are 50% less efficient than the quaternary bits.

Interestingly enough, you created each quaternary bit using 2 binary channels, so each binary bit takes up 50% less space than a quaternary bit. Net result: No efficiency improvement!
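The same bookkeeping in a few lines of Python, measuring information in binary bits per channel (channel counts taken from the pairing scheme above):

from math import log2

# (symbol levels, binary channels used to carry one symbol)
schemes = {
    "binary bit": (2, 1),
    "quaternary bit (2 paired binary channels)": (4, 2),
}

for name, (levels, channels) in schemes.items():
    bits = log2(levels)                       # information per symbol
    print(f"{name}: {bits:.0f} bit(s) over {channels} channel(s) "
          f"= {bits / channels:.1f} bits per channel")
# Both work out to 1.0 bits per channel: no net efficiency improvement.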

MetaL MilitiA

#83
Quote from: Adron on June 16, 2005, 11:53 AM
Quote from: MetaL MilitiA on June 15, 2005, 10:48 PM
Note: If double posting is against the rules, tell me, I'd be glad to combine this with my above post.

My solution consists of multiple binary channels simultaneously transmitting bits (it probably hurts reading that). With this solution, we would be forced to skip to quadratic code, as it would be a waste to have 1.5 channels. Instead of only increasing performance by 16 times like trinary code would, quadratic code would increase it by 4096 times (Four input variables create 2**(2x2x2x2) or 2 to the 16 power, or 65,536, where 65,536/16=4096).

Discuss?

Having multiple binary channels simultaneously transmitting bits is still just a binary system. Today's computers use 32 or 64 binary channels simultaneously transmitting bits. Those systems are called 32-bit processors and 64-bit processors. This is nothing new or revolutionary.
If you actually read my post above yours, you would notice you're repeating exactly what I said.
Quote
Aside from that, your calculations on the efficiency of systems as a function of the number of possible functions are flawed. You talk about storing 256 possible values or storing 16 possible values. 16 possible values correspond to 4 binary bits (2**4 == 16) and 4 trinary bits correspond to 81 possible values (3**4 == 81). Using your quaternary bits, you get 256 possible values though (4**4 == 256).
I also discovered my calculations were flawed last night, yet wasn't able to post the following. This is how trinary code relates to binary code: 1 bit of trinary (3**1=3) is 50% more efficient than 1 bit of binary (2**1=2). 2 bits of trinary (3**2=9) is 2.25x more efficient than 2 bits of binary (2**2=4). 3 bits of trinary (3**3=27) is 3.375x more efficient than 3 bits of binary (2**3=8). Now, let's jump to 8 bits. 8 bits of trinary (3**8=6561) is 25.62890625x more efficient than 8 bits of binary (2**8=256). The number will just keep growing exponentially like that, so the performance increase will always depend on the workload.
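Here's that ratio worked out quickly in Python (the same 3**n versus 2**n counts as above):

# States representable by n trits (3**n) versus n bits (2**n).
for n in (1, 2, 3, 8):
    trinary, binary = 3 ** n, 2 ** n
    print(f"n={n}: {trinary} vs {binary} -> {trinary / binary}x")
# n=1: 3 vs 2 -> 1.5x
# n=2: 9 vs 4 -> 2.25x
# n=3: 27 vs 8 -> 3.375x
# n=8: 6561 vs 256 -> 25.62890625x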
Quote
To have the same number of possible values with binary bits as with quaternary bits, you need 8 binary bits for 4 quaternary bits (2**8 == 4**4 == 256), meaning that the binary bits are 50% less efficient than the quaternary bits.

Interestingly enough, you created each quaternary bit using 2 binary channels, so each binary bit takes up 50% less space than a quaternary bit. Net result: No efficiency improvement!
If you actually read my post above yours, you would notice you're repeating exactly what I said.

R.a.B.B.i.T

Trinary is only more efficient in a mathematical sense.  Actually determining the values would degrade the quality severely.  The point of binary is that it's either "on" or "off"; with trinary there is "on", "off", and "that place in the middle...you know the one".  Creating circuits/components which could accurately and speedily determine the state (1, 2, 3) with no margin of error would cause said degradation.  Shh.  You've been proven wrong REPEATEDLY, please just drop it.

MyndFyre

Metal, the point isn't whether you can represent more values, because you can.  As rabbit said, there simply isn't the technology available to make the stuff more efficient.

You can use binary and have a fair bit of fluctuation in the voltages and still reliably tell if it's a digital signal.  We would either have to use a LOT more electricity (which would cause heat) or be much more precise in measurements in order for a single signal to be both digital and tri-state.

Eventually, it may be possible -- probably some time after we discover how to travel faster than light.  :P

MetaL MilitiA

#86
Quote from: rabbit on June 16, 2005, 04:44 PM
Trinary is only more efficient in a mathematical sense.  Actually determining the values would degrade the quality severely.  The point of binary is that it's either "on" or "off"; with trinary there is "on", "off", and "that place in the middle...you know the one".  Creating circuits/components which could accurately and speedily determine the state (1, 2, 3) with no margin of error would cause said degradation.  Shh.  You've been proven wrong REPEATEDLY, please just drop it.

Did you not read what I said either? You see that post directly above Adron's? Let's review it!
Inside of that post, I said the following.

Quote
...but only being able to fit half as many gates would negate anything gained...

This means that I know my system would not work.
Now, the following quote shows the only circumstance in which trinary code would make an improvement.

Quote
Trinary would be [faster and save more space] if implemented into a system that does not generate heat and can have three or more values on one data stream. This is the only possibility [to implement] trinary code, meaning it's likely impossible [to ever occur].

Now, let's review! I proved my theory of multiple binary channels wrong before anybody else did, yet other people are still trying to prove it wrong, like you! So uhh, don't tell me to drop it; I dropped even the possibility of it a long time ago. You guys drop it. You're the ones that keep coming back and trying to prove it wrong again when it's already been proven wrong.

Quote
Metal, the point isn't whether you can represent more values, because you can.  As rabbit said, there simply isn't the technology available to make the stuff more efficient.

You can use binary and have a fair bit of fluctuation in the voltages and still reliably tell if it's a digital signal.  We would either have to use a LOT more electricity (which would cause heat) or be much more precise in measurements in order for a single signal to be both digital and tri-state.

Eventually, it may be possible -- probably some time after we discover how to travel faster than light.  :P

MyndFyre, if you had actually read and comprehended my post above Adron's, you would realize you repeated exactly what I said, but missed one thing. You won't be able to implement trinary code into a system that involves electricity, as that would require another transistor, which would increase heat. With more heat on the CPU, you would have to get rid of some of the other transistors that help with speed in order to keep it the same size (meaning a speed reduction), which would simply negate any improvement.

I don't know how many times I'm going to have to say this before people understand, so I'll make the font a little bigger. Read it slowly multiple times if you have to, just make sure you comprehend those 2 sentences before your next post.

Trinary coding would be faster and save more space than binary if implemented into a system that does not generate heat and can have three or more values on one data stream. This is the only possibility for trinary code, meaning it's probably impossible to ever happen.

R.a.B.B.i.T

Then DROP IT ALREADY.  Stop posting if we have it right and you know it.

MetaL MilitiA

#88
Here, let's take a look at exactly what happened!

  1. I came back, and decided to make a final conclusion that wasn't previously posted.
  2. You guys come back trying to prove something wrong that I not only already proved wrong, but don't even agree with anymore.
  3. I make a couple posts telling you guys that you're just repeating exactly what I said.
  4. You come back like a complete moron making a post that proves you didn't read one thing I said, and you tell me to drop it.

Now, none of your posts are even relevant to anything in my final conclusion, and nobody has come up with the conclusion that I had, yet you say I agree with you? Then you have the nerve to say what you just said, even though you've been a complete idiot this entire topic? Just for fun now, I'm probably going to make a post in the "Stupid People Arguing About Stupid Things Forum" consisting of every single one of your posts in this thread, and a follow-up for each of them on why it makes you look like a moron.

EDIT: You know, I'm not even going to go through the trouble; people must already realize how much of a dumbass you really are (it's really hard to miss), so why even bother?

R.a.B.B.i.T

Quote from: MetaL MilitiA on June 16, 2005, 10:02 PM
Now, let's review! I proved my theory of multiple binary channels wrong before anybody else did, yet other people are still trying to prove it wrong, like you! So uhh, don't tell me to drop it; I dropped even the possibility of it a long time ago. You guys drop it. You're the ones that keep coming back and trying to prove it wrong again when it's already been proven wrong.
1. We proved you wrong.  Repeatedly.
2. My last post was May 11.  You came June 15, more than a month later, and posted again, so don't tell us "we came back and keep coming".
3. If you've dropped it, stfu.  YOU TRIPLE POSTED.
4. See #3.

@Mod: Lock please.
