Bit-shifting a std_logic_vector while keeping precision and converting to signed

Posted by ≯℡__Kan透↙ on 2019-12-08 09:33:09

Question


In VHDL I want to take a 14-bit input and append '00' to the end to give a 16-bit number, which is the 14-bit input multiplied by 4, and then put this into a 17-bit signed variable such that it is positive (the input is always positive). How should I go about this?

like this? shiftedInput <= to_signed('0' & input & '00', 17);

Or maybe like this? shiftedInput <= to_signed(input sll 2, 17);

Or this? shiftedInput <= to_signed(input & '00', 17);

Does it see that the std_logic_vector it's getting is 16 bits and the signed variable is 17 bits, and therefore assume the most significant bit (the sign bit) is 0?

Or do I have to do this? shiftedInput <= to_signed('0' & input sll 2, 17);

e.g. if I read in the 14-bit number 17 as a std_logic_vector [i.e. (00 0000 0001 0001)], it should be converted to the signed number +68 [i.e. (0 0000 0000 0100 0100)].


Answer 1:


std_logic_vector and the signed type of numeric_std are closely related types, so a plain type conversion is allowed. The conversion function is signed (not to_signed, which converts an integer into a signed vector):

shiftedInput <= signed('0' & input & "00");

should do it. Note the "00" instead of your '00': bit-string literals are double-quoted, while single bits are single-quoted.
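
For reference, here is a minimal, self-contained sketch of the same technique. The entity name shift_example and the use of ports rather than a variable are illustrative assumptions; the port names and widths follow the question, and the library clauses are the usual ones needed for numeric_std.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity shift_example is
    port (
        input        : in  std_logic_vector(13 downto 0);  -- 14-bit, always positive
        shiftedInput : out signed(16 downto 0)              -- 17-bit signed result
    );
end entity shift_example;

architecture rtl of shift_example is
begin
    -- Prepend a '0' sign bit and append "00" (multiply by 4):
    -- 1 + 14 + 2 = 17 bits, so the type conversion signed(...)
    -- matches the target width exactly, no resizing needed.
    -- For input = 17 ("00000000010001"), shiftedInput becomes +68.
    shiftedInput <= signed('0' & input & "00");
end architecture rtl;

Because the concatenation already produces exactly 17 bits with a leading '0', the result is guaranteed non-negative when interpreted as signed.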



Source: https://stackoverflow.com/questions/39066106/bitshifting-std-logic-vector-while-keep-precision-and-conversion-to-signed
