UWP ECDSA Signature


Question


I want to make an ECDSA signature with this code:

using System.Runtime.InteropServices.WindowsRuntime; // for buffSIG.ToArray()
using Windows.Security.Cryptography;
using Windows.Security.Cryptography.Core;
using Windows.Storage.Streams;

AsymmetricKeyAlgorithmProvider objAsymmAlgProv = AsymmetricKeyAlgorithmProvider.OpenAlgorithm(AsymmetricAlgorithmNames.EcdsaSha256);
CryptographicKey keypair = objAsymmAlgProv.CreateKeyPairWithCurveName(EccCurveNames.SecP256r1);
BinaryStringEncoding encoding = BinaryStringEncoding.Utf8;
IBuffer buffMsg = CryptographicBuffer.ConvertStringToBinary("Test Message", encoding);
IBuffer buffSIG = CryptographicEngine.Sign(keypair, buffMsg);
byte[] SignByteArray = buffSIG.ToArray();
bool res = CryptographicEngine.VerifySignature(keypair, buffMsg, buffSIG);

VerifySignature always returns true, which is fine.

But I have some questions about the signature.

Why is the length of the signature (SignByteArray) fixed at 0x40 bytes?

And why are the SignByteArray[0] and SignByteArray[2] values not what I expect? (I think they should be 0x30 and 0x02.)

I expected something like the output at https://kjur.github.io/jsrsasign/sample-ecdsa.html


Answer 1:


The ECDSA specification concludes by declaring that the pair (r, s) is the signature. What it neglects to do is say how that pair should be written down as bytes.

Windows and .NET use the IEEE (P)1363 format: big-endian r concatenated with big-endian s. r and s have the same size (determined by the key size), so the signature is always even in length and r is the first half. That is why your signature is always 0x40 bytes: for the P-256 curve, r and s are 32 (0x20) bytes each.
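A minimal sketch of splitting that fixed-size signature back into its halves (reusing buffSIG from the question's code; assumes the ToArray() extension from System.Runtime.InteropServices.WindowsRuntime is in scope):

byte[] sig = buffSIG.ToArray();       // always 0x40 bytes for P-256
byte[] r = new byte[sig.Length / 2];  // first half: big-endian r
byte[] s = new byte[sig.Length / 2];  // second half: big-endian s
Array.Copy(sig, 0, r, 0, r.Length);
Array.Copy(sig, r.Length, s, 0, s.Length);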

OpenSSL uses an ASN.1/DER encoding, which is SEQUENCE(INTEGER(r), INTEGER(s)). The DER encoding is variable-length: it can go all the way down to 8 bytes (30 06 02 01 00 02 01 00, in the degenerate r=0, s=0 case, since a DER INTEGER needs at least one content byte) and, for P-256, is typically 6 to 8 bytes bigger than the IEEE form. It encodes as:

30 [length, one or more bytes]
   02 [length, one or more bytes] [optional 00 pad] [big-endian r with no leading 00s]
   02 [length, one or more bytes] [optional 00 pad] [big-endian s with no leading 00s]

The DER form is too data-dependent to describe concisely, so an example should help. Assume we're using a curve over a 32-bit field and we generate (r=1016, s=2289644760).

IEEE 1363:

// r
00 00 03 F8
// s
88 79 34 D8

DER:

SEQUENCE(INTEGER(1016), INTEGER(2289644760))

// Encode r
// 1016 => 0x3F8 => 03 F8 (length 02)
SEQUENCE(
    02 02
       03 F8,
    INTEGER(2289644760))

// Encode s
// 2289644760 => 0x887934D8 => 88 79 34 D8
// But since the high bit is set this is a negative number (-2005322536),
// and s is defined to be positive.  So insert a 00 to ensure the high bit is clear.
//   => 00 88 79 34 D8 (length 05)
SEQUENCE(
    02 02
       03 F8
    02 05
       00 88 79 34 D8)

// And encode the sequence, whose payload length we can now count as 11 (0B)
30 0B
   02 02
      03 F8
   02 05
      00 88 79 34 D8

So Windows/.NET emit 00 00 03 F8 88 79 34 D8, and OpenSSL emits 30 0B 02 02 03 F8 02 05 00 88 79 34 D8. But they're both just saying (r, s) = (1016, 2289644760).
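If you need the DER form for interop (e.g., to hand the signature to jsrsasign or OpenSSL), a minimal sketch of the conversion described above could look like the following. P1363ToDer and EncodeDerInteger are hypothetical helper names, and the sketch assumes the total payload stays under 128 bytes, which always holds for P-256:

// Hypothetical helpers: convert a fixed-size IEEE P1363 signature (r || s)
// into DER SEQUENCE(INTEGER(r), INTEGER(s)). Assumes the payload stays
// under 128 bytes, so short-form lengths are enough (true for P-256).
static byte[] P1363ToDer(byte[] p1363)
{
    int half = p1363.Length / 2;
    byte[] r = EncodeDerInteger(p1363, 0, half);
    byte[] s = EncodeDerInteger(p1363, half, half);

    byte[] der = new byte[2 + r.Length + s.Length];
    der[0] = 0x30;                        // SEQUENCE tag
    der[1] = (byte)(r.Length + s.Length); // short-form payload length
    r.CopyTo(der, 2);
    s.CopyTo(der, 2 + r.Length);
    return der;
}

static byte[] EncodeDerInteger(byte[] buf, int offset, int count)
{
    // Strip leading 00 bytes (DER integers use the minimal encoding),
    // but keep at least one byte so INTEGER(0) encodes as 02 01 00.
    int start = offset;
    while (start < offset + count - 1 && buf[start] == 0x00)
        start++;

    // If the high bit is set, prepend 00 so the value reads as positive.
    bool pad = (buf[start] & 0x80) != 0;
    int contentLen = (offset + count - start) + (pad ? 1 : 0);

    byte[] result = new byte[2 + contentLen];
    result[0] = 0x02;             // INTEGER tag
    result[1] = (byte)contentLen; // short-form length
    int pos = 2;
    if (pad)
        result[pos++] = 0x00;
    Array.Copy(buf, start, result, pos, offset + count - start);
    return result;
}

Feeding it the example above, P1363ToDer(new byte[] { 0x00, 0x00, 0x03, 0xF8, 0x88, 0x79, 0x34, 0xD8 }) produces exactly 30 0B 02 02 03 F8 02 05 00 88 79 34 D8.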

(Aside: your observation that signature[2] == 0x02 in the DER encoding is correct for the key size you're working with, but at around a 496-bit key the SEQUENCE payload becomes statistically likely to need more than one length byte; so for a P-521 key the signature most likely starts 30 81 88 02, with variability in the 88 byte.)



Source: https://stackoverflow.com/questions/41232601/uwp-ecdsp-signature
