bcd

How do ASCII Adjust and Decimal Adjust instructions work?

ε祈祈猫儿з submitted on 2019-11-30 21:18:50
I've been struggling with understanding the ASCII adjust instructions from x86 assembly language. I see information all over the internet telling me different things, but I guess it's just the same thing explained in different forms that I still don't get. Can anyone explain why, in the pseudo-code of AAA and AAS, we have to add or subtract 6 from the low-order nibble in AL? And can someone explain AAM, AAD and the Decimal Adjust instructions' pseudo-code in the Intel instruction set manuals too, why are they like that, what's the logic behind them? And lastly, can someone give examples of when these …
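The short answer to the "why 6" part is that a 4-bit nibble has 16 codes while a decimal digit uses only 10 of them, so six codes (0xA to 0xF) must be skipped. Below is a minimal C sketch of that idea for a single-digit add; it is only an illustration, not the processor's actual AAA logic, which also consults the auxiliary-carry flag AF:

    #include <stdio.h>

    /* Toy illustration, not real AAA/DAA microcode: adding two BCD digits
     * in binary can land in the unused nibble codes 0xA..0xF; adding 6
     * skips that gap and yields a valid BCD digit plus a decimal carry. */
    int main(void)
    {
        unsigned a = 8, b = 5;       /* two decimal digits */
        unsigned sum = a + b;        /* binary sum: 0x0D (13) */
        if ((sum & 0x0F) > 9)        /* low nibble is not a valid BCD digit */
            sum += 6;                /* skip the six unused nibble codes */
        printf("0x%02X\n", sum);     /* prints 0x13: digit 3 with carry 1 */
        return 0;
    }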

Packing BCD to DPD: How to improve this amd64 assembly routine?

情到浓时终转凉″ submitted on 2019-11-30 18:45:02
I'm writing a routine to convert between BCD (4 bits per decimal digit) and Densely Packed Decimal (DPD) (10 bits per 3 decimal digits). DPD is further documented (with the suggestion for software to use lookup tables) on Mike Cowlishaw's web site. This routine only ever requires the lower 16 bits of the registers it uses, yet for shorter instruction encoding I have used 32-bit instructions wherever possible. Is a speed penalty associated with code like: mov data,%eax # high 16 bits of data are cleared ... shl %al shr %eax or and $0x888,%edi # = 0000 a000 e000 i000 imul $0x0490,%di # = aei0 …
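As an aside on what the masks in the snippet are doing: the and $0x888 isolates the top bit of each BCD digit, which is what tells a DPD encoder whether a digit is "large" (8 or 9). A small, self-contained C illustration of just that split (the constants and variable names are mine, and this is not the full declet encoding):

    #include <stdint.h>
    #include <stdio.h>

    /* For a 12-bit BCD group "abcd efgh ijkm", the top bit of each digit
     * (a, e, i) marks a large digit (8 or 9); DPD builds its 10-bit declet
     * from those three indicator bits plus low bits of the digits. This
     * shows only the split, not the encoding itself. */
    int main(void)
    {
        uint16_t bcd = 0x914;             /* digits 9, 1, 4 */
        uint16_t aei = bcd & 0x888;       /* a000 e000 i000 -> 0x800 here */
        uint16_t low = bcd & 0x777;       /* low 3 bits of each digit -> 0x114 */
        printf("aei=0x%03X low=0x%03X\n", aei, low);
        return 0;
    }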

Digital circuits: converting binary to BCD

杀马特。学长 韩版系。学妹 submitted on 2019-11-30 12:07:19
BCD simply represents each digit of a decimal number with its own 4-bit binary code, each digit ranging from 0 to 9. Four binary bits can express the range 0 to 15, so a BCD segment differs from a plain 4-bit binary field by a gap of 6, and that gap is the key to converting binary into BCD. The main steps: first estimate how many decimal digits the result will have and pre-allocate that many BCD segments, all initially empty. Then feed the binary number in, starting from its most significant bit, into the least significant end of the BCD code. You can view this as shifting the binary number and the BCD code left together, chopping the head off the binary number each time and appending it to the tail of the BCD code, although I find bit-by-bit insertion more intuitive. Now the key point: as mentioned above, each 4-bit BCD segment (one decimal digit) may only hold 0 to 9, while a plain 4-bit binary field holds 0 to 15, so as bits are inserted, whenever a BCD segment exceeds 9 we add 6 to that segment to force a carry and keep it within the BCD range. In short: keep shifting left together, and whenever a segment is greater than 9, add 6. This can be optimized into the commonly used "+3 (add 011) before the shift" method: before each shift, check whether a segment is greater than 4 (0100); if it is, it would necessarily exceed 9 after the shift, so add 3 (011) to it before shifting, and the carry is already in place once the shift happens. Keep inserting the binary bits from the low end (shifting left) while holding every segment within 0 to 9, and the final result is the BCD code. Source: https://www.cnblogs.com/GorgeousBankarian
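A minimal C sketch of the shift-and-add-3 (double dabble) procedure described above, assuming a 16-bit input and five BCD digits; the function name and widths are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    /* Convert a 16-bit binary value to packed BCD (5 digits, 20 bits)
     * using the shift-and-add-3 method described above. */
    uint32_t bin16_to_bcd(uint16_t bin)
    {
        uint32_t bcd = 0;                        /* 4 bits per decimal digit */
        for (int i = 15; i >= 0; --i) {
            /* Before shifting, add 3 to every BCD digit greater than 4,
             * so the shift produces the correct decimal carry. */
            for (int d = 0; d < 5; ++d) {
                uint32_t digit = (bcd >> (4 * d)) & 0xF;
                if (digit > 4)
                    bcd += (uint32_t)3 << (4 * d);
            }
            /* Shift left and bring in the next binary bit (MSB first). */
            bcd = (bcd << 1) | ((bin >> i) & 1);
        }
        return bcd;
    }

    int main(void)
    {
        printf("%X\n", bin16_to_bcd(514));       /* prints 514, i.e. BCD 0x514 */
        return 0;
    }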

How do ASCII Adjust and Decimal Adjust instructions work?

梦想与她 submitted on 2019-11-30 05:17:19
Question: I've been struggling with understanding the ASCII adjust instructions from x86 assembly language. I see information all over the internet telling me different things, but I guess it's just the same thing explained in different forms that I still don't get. Can anyone explain why, in the pseudo-code of AAA and AAS, we have to add or subtract 6 from the low-order nibble in AL? And can someone explain AAM, AAD and the Decimal Adjust instructions' pseudo-code in the Intel instruction set manuals too, …

Packing BCD to DPD: How to improve this amd64 assembly routine?

左心房为你撑大大i submitted on 2019-11-30 03:18:42
Question: I'm writing a routine to convert between BCD (4 bits per decimal digit) and Densely Packed Decimal (DPD) (10 bits per 3 decimal digits). DPD is further documented (with the suggestion for software to use lookup tables) on Mike Cowlishaw's web site. This routine only ever requires the lower 16 bits of the registers it uses, yet for shorter instruction encoding I have used 32-bit instructions wherever possible. Is a speed penalty associated with code like: mov data,%eax # high 16 bits of data are …

Basic project (4): binary-to-BCD conversion

我只是一个虾纸丫 submitted on 2019-11-29 13:17:48
A few words up front: our data is normally stored and operated on in binary form, but in many situations the result has to be shown on some kind of display device, and showing it directly in binary is very inconvenient to read. We therefore need to convert the binary number to decimal before displaying it. There are many ways to convert binary to decimal. In this section, 梦翼师兄 and the reader study a conversion method that is currently very popular abroad, the step-by-step shift method. With it we can convert the data format without any cycle difference, while keeping resource usage quite small. Basic concepts: BCD (Binary-Coded Decimal), also called binary-coded decimal numbers or binary-decimal code, uses a 4-bit binary number to represent one decimal digit, 0 through 9. Because each decimal digit is stored in four bits, conversion between binary and decimal can be carried out quickly. This encoding technique is used often in FPGA work: for example, when a value entered on a matrix keypad has to be shown on a seven-segment display, the keypad delivers a binary number while the display needs decimal digits, so the binary number must be converted into BCD, something we will run into repeatedly in later designs. 7.3.3 Principle of the step-by-step shift method: in this design we use the step-by-step shift method to convert a binary number into BCD. Before starting the design, let's look at the principle behind the conversion. Variable definitions: B: bit width of the binary number to be converted; D: bit width of the resulting BCD code (where …
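The excerpt breaks off while defining D. Purely as an illustrative aside (my own arithmetic, not the original author's formula), D follows from how many decimal digits a B-bit value can need (link with -lm under gcc):

    #include <math.h>
    #include <stdio.h>

    /* A B-bit value can be as large as 2^B - 1, which has
     * floor(B * log10(2)) + 1 decimal digits; each digit takes 4 BCD bits. */
    int main(void)
    {
        for (int B = 8; B <= 32; B += 8) {
            int digits = (int)floor(B * log10(2.0)) + 1;
            printf("B = %2d -> %2d decimal digits -> D = %2d BCD bits\n",
                   B, digits, 4 * digits);
        }
        return 0;
    }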

Convert integer from (pure) binary to BCD

戏子无情 submitted on 2019-11-29 12:35:08
I'm too stupid right now to solve this problem... I get a BCD number (every digit is its own 4-bit representation). For example, what I want: Input: 0x202 (hex) == 514 (dec), Output: BCD 0x514. Input: 0x202, bit representation: 0010 0000 0010 = 514. What have I tried: unsigned int uiValue = 0x202; unsigned int uiResult = 0; unsigned int uiMultiplier = 1; unsigned int uiDigit = 0; // get the dec bcd value while ( uiValue > 0 ) { uiDigit = uiValue & 0x0F; uiValue >>= 4; uiResult += uiMultiplier * uiDigit; uiMultiplier *= 10; } But I know that's very wrong; this would be 202 in bit representation and then …
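A minimal C sketch of the direction the question actually asks for, binary to packed BCD with the least significant digit in the lowest nibble; it is essentially the asker's loop run in reverse, not code from the thread:

    #include <stdint.h>
    #include <stdio.h>

    /* Pure binary -> packed BCD, one decimal digit per nibble. */
    uint32_t bin_to_bcd(uint32_t value)
    {
        uint32_t bcd = 0;
        int shift = 0;
        while (value > 0) {
            bcd |= (value % 10) << shift;  /* place the next decimal digit */
            value /= 10;
            shift += 4;                    /* move to the next nibble */
        }
        return bcd;
    }

    int main(void)
    {
        printf("0x%X\n", bin_to_bcd(0x202));   /* 0x202 == 514, prints 0x514 */
        return 0;
    }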

Unsigned Integer to BCD conversion?

。_饼干妹妹 submitted on 2019-11-28 13:33:19
I know you can use this table to convert decimal to BCD: 0 = 0000, 1 = 0001, 2 = 0010, 3 = 0011, 4 = 0100, 5 = 0101, 6 = 0110, 7 = 0111, 8 = 1000, 9 = 1001. Is there an equation for this conversion, or do you have to just use the table? I'm trying to write some code for this conversion but I'm not sure how to do the math for it. Suggestions? You know the Binary numeral system, don't you? Especially have a look at this chapter. EDIT: Also note KFro's comment that the lower nibble (= 4 bits) of the binary ASCII representation of numerals is in BCD. This makes conversions BCD <-> ASCII very easy, as you just have to add/remove the …
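A tiny self-contained C illustration of the point made in that EDIT, assuming ASCII digits: the low nibble of '0'..'9' (0x30..0x39) already holds the BCD value, so the conversion is a mask one way and an OR with 0x30 the other:

    #include <stdio.h>

    int main(void)
    {
        char c = '7';
        unsigned bcd = c & 0x0F;           /* ASCII digit -> BCD nibble: 7 */
        char back    = (char)(bcd | 0x30); /* BCD nibble -> ASCII digit: '7' */
        printf("%u %c\n", bcd, back);      /* prints: 7 7 */
        return 0;
    }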

Converting an int to a BCD byte array

大城市里の小女人 submitted on 2019-11-28 08:11:19
Question: I want to convert an int to a byte[2] array using BCD. The int in question will come from a DateTime representing the Year and must be converted to two bytes. Is there any pre-made function that does this, or can you give me a simple way of doing it? Example: int year = 2010 would output: byte[2]{0x20, 0x10}; Answer 1: static byte[] Year2Bcd(int year) { if (year < 0 || year > 9999) throw new ArgumentException(); int bcd = 0; for (int digit = 0; digit < 4; ++digit) { int nibble = year % 10; bcd |= …
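The C# answer above is cut off in this excerpt. As a hedged sketch of the same idea in C (the function name and the direct arithmetic are mine, not the original answer's loop):

    #include <stdint.h>
    #include <stdio.h>

    /* Pack a four-digit year into two BCD bytes, most significant pair
     * first, so 2010 becomes {0x20, 0x10}. Assumes 0 <= year <= 9999. */
    static void year_to_bcd(int year, uint8_t out[2])
    {
        out[0] = (uint8_t)(((year / 1000) << 4) | ((year / 100) % 10));
        out[1] = (uint8_t)((((year / 10) % 10) << 4) | (year % 10));
    }

    int main(void)
    {
        uint8_t b[2];
        year_to_bcd(2010, b);
        printf("0x%02X 0x%02X\n", b[0], b[1]);   /* prints 0x20 0x10 */
        return 0;
    }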