How can I implement this python code in c#?
Python code:
print(str(int(str(\"e60f553e42aa44aebf1d6723b0be7541\"), 16)))
Result:
305802052421002911840647389720929531201
Primitive types (such as Int32, Int64) have a fixed size that is not large enough for such a big number. For example:
Data type     Maximum positive value
Int32         2,147,483,647
UInt32        4,294,967,295
Int64         9,223,372,036,854,775,807
UInt64        18,446,744,073,709,551,615
Your number   305,802,052,421,002,911,840,647,389,720,929,531,201
In this case you need 128 bits to represent that number. .NET Framework 4.0 introduced a data type for arbitrarily large integers: System.Numerics.BigInteger. You do not need to specify any size because it is inferred from the number itself (which means you may even get an OutOfMemoryException when, for example, you multiply two very big numbers).
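As a quick illustration of the arbitrary size, here is a minimal sketch (not related to your input, just a demonstration):

using System;
using System.Numerics;

// 2^1000 needs 1001 bits, far beyond any primitive type;
// its decimal representation has 302 digits.
BigInteger huge = BigInteger.Pow(2, 1000);
Console.WriteLine(huge.ToString().Length); // 302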
To come back to your question, first parse your hexadecimal number:
using System.Globalization;
using System.Numerics;

string bigNumberAsText = "e60f553e42aa44aebf1d6723b0be7541";

// Prepend "0": with NumberStyles.AllowHexSpecifier, a leading hex digit
// of 8-F sets the sign bit and the string is parsed as a negative number.
BigInteger bigNumber = BigInteger.Parse("0" + bigNumberAsText,
    NumberStyles.AllowHexSpecifier);
Then simply print it to the console:
Console.WriteLine(bigNumber.ToString());
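Putting the pieces together, a minimal complete program might look like this (a sketch; the class name HexToDecimal is arbitrary):

using System;
using System.Globalization;
using System.Numerics;

class HexToDecimal
{
    static void Main()
    {
        // Equivalent of Python's int("e60f553e42aa44aebf1d6723b0be7541", 16).
        BigInteger bigNumber = BigInteger.Parse(
            "0" + "e60f553e42aa44aebf1d6723b0be7541",
            NumberStyles.AllowHexSpecifier);

        // Prints 305802052421002911840647389720929531201,
        // the same value the Python snippet produces.
        Console.WriteLine(bigNumber);
    }
}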
If you are interested in calculating how many bits you need to represent an arbitrary number, you can use this function (if I remember correctly, the original implementation comes from Numerical Recipes in C):
public static uint GetNeededBitsToRepresentInteger(BigInteger value)
{
    // Work on the magnitude: right-shifting a negative BigInteger
    // never reaches zero (the sign is preserved, so -1 >> 1 == -1),
    // and the loop below would not terminate for negative inputs.
    value = BigInteger.Abs(value);

    uint neededBits = 0;
    while (value != 0)
    {
        value >>= 1;
        ++neededBits;
    }
    return neededBits;
}
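For example, with the bigNumber parsed above, this returns 128 (the highest bit of the hexadecimal value is set, since it starts with e):

Console.WriteLine(GetNeededBitsToRepresentInteger(bigNumber)); // 128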
Then, to calculate the required size of a number written as a string:
public static uint GetNeededBitsToRepresentInteger(string value,
    NumberStyles numberStyle = NumberStyles.None)
{
    return GetNeededBitsToRepresentInteger(
        BigInteger.Parse(value, numberStyle));
}
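Used with the hexadecimal input from your question (again with the leading "0" to keep the parsed value positive):

Console.WriteLine(GetNeededBitsToRepresentInteger(
    "0e60f553e42aa44aebf1d6723b0be7541",
    NumberStyles.AllowHexSpecifier)); // 128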