path: root/crypto/testmgr.c
Age  Commit message  Author
2024-04-12  crypto: ecdsa - Register NIST P521 and extend test suite  (Stefan Berger)
Register NIST P521 as an akcipher and extend the testmgr with NIST P521-specific test vectors. Add a module alias so the module gets automatically loaded by the crypto subsystem when the curve is needed. Tested-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
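A minimal sketch of the module-alias side of such a registration; the exact alias string is assumed here to follow the existing ecdsa-nist-p256/p384 naming and is not quoted from the commit:

  /* Lets the crypto subsystem load the provider module on demand when
   * a user asks for the curve (alias name assumed for illustration). */
  MODULE_ALIAS_CRYPTO("ecdsa-nist-p521");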
2024-01-26  crypto: testmgr - remove unused xts4096 and xts512 algorithms from testmgr.c  (Joachim Vandersmissen)
Commit a93492cae30a ("crypto: ccree - remove data unit size support") removed support for the xts512 and xts4096 algorithms, but left them defined in testmgr.c. This patch removes those definitions. Signed-off-by: Joachim Vandersmissen <git@jvdsn.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  crypto: iaa - Add support for deflate-iaa compression algorithm  (Tom Zanussi)
This patch registers the deflate-iaa deflate compression algorithm and hooks it up to the IAA hardware using the 'fixed' compression mode introduced in the previous patch.

Because the IAA hardware has a 4k history-window limitation, only buffers <= 4k, or that have been compressed using a <= 4k history window, are technically compliant with the deflate spec, which allows for a window of up to 32k. Because of this limitation, the IAA fixed mode deflate algorithm is given its own algorithm name, 'deflate-iaa'.

With this change, the deflate-iaa crypto algorithm is registered and operational, and compression and decompression operations are fully enabled following the successful binding of the first IAA workqueue to the iaa_crypto sub-driver. When there are no IAA workqueues bound to the driver, the IAA crypto algorithm can be unregistered by removing the module.

A new iaa_crypto 'verify_compress' driver attribute is also added, allowing the user to toggle compression verification. If set, each compress will be internally decompressed and the contents verified, returning error codes if unsuccessful. This can be toggled with 0/1:

  echo 0 > /sys/bus/dsa/drivers/crypto/verify_compress

The default setting is '1' - verify all compresses.

The verify_compress value setting at the time the algorithm is registered is captured in the algorithm's crypto_ctx and used for all compresses when using the algorithm.

[ Based on work originally by George Powley, Jing Lin and Kyung Min Park ]

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-08  crypto: testmgr - Remove cfb and ofb  (Herbert Xu)
Remove test vectors for CFB/OFB. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-11-17  crypto: drbg - Remove SHA1 from drbg  (Dimitri John Ledkov)
The SP800-90C 3rd draft states that SHA-1 will be removed from all specifications, including DRBG, by the end of 2030. Given that kernels built today will be operating past that date, start complying with the upcoming requirements. No functional change, as the SHA-256 / SHA-512 based DRBGs have always been the preferred ones. Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-11-01  crypto: testmgr - move pkcs1pad(rsa,sha3-*) to correct place  (Eric Biggers)
alg_test_descs[] needs to be in sorted order, since it is used for binary search. This fixes the following boot-time warning: testmgr: alg_test_descs entries in wrong order: 'pkcs1pad(rsa,sha512)' before 'pkcs1pad(rsa,sha3-256)' Fixes: ee62afb9d02d ("crypto: rsa-pkcs1pad - Add FIPS 202 SHA-3 support") Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
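For context, a sketch of the kind of boot-time ordering check that produces the warning above; the names follow testmgr's conventions, but this is illustrative rather than the exact upstream code:

  /* alg_test_descs[] is searched with bsearch() by algorithm name, so a
   * simple pairwise strcmp() over the array catches misplaced entries. */
  for (i = 1; i < ARRAY_SIZE(alg_test_descs); i++) {
          if (strcmp(alg_test_descs[i - 1].alg, alg_test_descs[i].alg) > 0)
                  pr_err("testmgr: alg_test_descs entries in wrong order: '%s' before '%s'\n",
                         alg_test_descs[i - 1].alg, alg_test_descs[i].alg);
  }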
2023-10-27  crypto: rsa-pkcs1pad - Add FIPS 202 SHA-3 support  (Dimitri John Ledkov)
Add support in rsa-pkcs1pad for FIPS 202 SHA-3 hashes, sizes 256 and up; 224 is too weak for any practical purposes. Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27  crypto: testmgr - stop checking crypto_ahash_alignmask  (Eric Biggers)
Now that the alignmask for ahash and shash algorithms is always 0, crypto_ahash_alignmask() always returns 0 and will be removed. In preparation for this, stop checking crypto_ahash_alignmask() in testmgr. As a result of this change, test_sg_division::offset_relative_to_alignmask and testvec_config::key_offset_relative_to_alignmask no longer have any effect on ahash (or shash) algorithms. Therefore, also stop setting these flags in default_hash_testvec_configs[]. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-27  crypto: testmgr - stop checking crypto_shash_alignmask  (Eric Biggers)
Now that the shash algorithm type does not support nonzero alignmasks, crypto_shash_alignmask() always returns 0 and will be removed. In preparation for this, stop checking crypto_shash_alignmask() in testmgr. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-13  crypto: arc4 - Convert from skcipher to lskcipher  (Herbert Xu)
Replace skcipher implementation with lskcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-10-12  crypto: testmgr - Remove zlib-deflate  (Herbert Xu)
Remove zlib-deflate test vectors as it no longer exists in the kernel. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
2023-09-20  crypto: testmgr - Add support for lskcipher algorithms  (Herbert Xu)
Test lskcipher algorithms using the same logic as cipher algorithms. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-04-20  crypto: testmgr - Add some test vectors for cmac(camellia)  (David Howells)
Add some test vectors for 128-bit cmac(camellia) as found in draft-kato-ipsec-camellia-cmac96and128-01 section 6.2. The document also shows vectors for camellia-cmac-96, and for VK with a length greater than 16, but I'm not sure how to express those in testmgr. This also leaves cts(cbc(camellia)) untested, but I can't seem to find any tests for that that I could put into testmgr. Signed-off-by: David Howells <dhowells@redhat.com> cc: Herbert Xu <herbert@gondor.apana.org.au> cc: Chuck Lever <chuck.lever@oracle.com> cc: Scott Mayhew <smayhew@redhat.com> cc: linux-nfs@vger.kernel.org cc: linux-crypto@vger.kernel.org Link: https://datatracker.ietf.org/doc/pdf/draft-kato-ipsec-camellia-cmac96and128-01 Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14  crypto: testmgr - fix RNG performance in fuzz tests  (Eric Biggers)
The performance of the crypto fuzz tests has greatly regressed since v5.18. When booting a kernel on an arm64 dev board with all software crypto algorithms and CONFIG_CRYPTO_MANAGER_EXTRA_TESTS enabled, the fuzz tests now take about 200 seconds to run, or about 325 seconds with lockdep enabled, compared to about 5 seconds before.

The root cause is that the random number generation has become much slower due to commit d4150779e60f ("random32: use real rng for non-deterministic randomness"). On my same arm64 dev board, at the time the fuzz tests are run, get_random_u8() is about 345x slower than prandom_u32_state(), or about 469x if lockdep is enabled. Lockdep makes a big difference, but much of the rest comes from the get_random_*() functions taking a *very* slow path when the CRNG is not yet initialized. Since the crypto self-tests run early during boot, even having a hardware RNG driver enabled (CONFIG_CRYPTO_DEV_QCOM_RNG in my case) doesn't prevent this. x86 systems don't have this issue, but they still see a significant regression if lockdep is enabled.

Converting the "Fully random bytes" case in generate_random_bytes() to use get_random_bytes() helps significantly, improving the test time to about 27 seconds. But that's still over 5x slower than before.

This is all a bit silly, though, since the fuzz tests don't actually need cryptographically secure random numbers. So let's just make them use a non-cryptographically-secure RNG as they did before. The original prandom_u32() is gone now, so let's use prandom_u32_state() instead, with an explicitly managed state, like various other self-tests in the kernel source tree (rbtree_test.c, test_scanf.c, etc.) already do. This also has the benefit that no locking is required anymore, so performance should be even better than the original version that used prandom_u32().

Fixes: d4150779e60f ("random32: use real rng for non-deterministic randomness")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
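A minimal sketch of the approach described above, with assumed local names; the point is an explicitly managed, non-cryptographic RNG state that needs no locking:

  #include <linux/prandom.h>

  static struct rnd_state fuzz_rng;     /* local fuzz-test RNG state (name assumed) */

  static void fuzz_rng_init(void)
  {
          /* The fuzz tests only need "random-looking" bytes, not
           * cryptographic randomness, so any seed is fine. */
          prandom_seed_state(&fuzz_rng, get_random_u64());
  }

  static u8 fuzz_random_byte(void)
  {
          return prandom_u32_state(&fuzz_rng) & 0xff;
  }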
2023-02-10  crypto: testmgr - add diff-splits of src/dst into default cipher config  (Zhang Yiqun)
This type of request often happens in AF_ALG cases. So add this vector to the default cipher config array. Signed-off-by: Zhang Yiqun <zhangyiqun@phytium.com.cn> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-27  crypto: testmgr - disallow certain DRBG hash functions in FIPS mode  (Vladis Dronov)
According to FIPS 140-3 IG, section D.R "Hash Functions Acceptable for Use in the SP 800-90A DRBGs", modules certified after May 16th, 2023 must not support the use of: SHA-224, SHA-384, SHA512-224, SHA512-256, SHA3-224, SHA3-384. Disallow HMAC and HASH DRBGs using SHA-384 in FIPS mode. Signed-off-by: Vladis Dronov <vdronov@redhat.com> Reviewed-by: Stephan Müller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-06  crypto: testmgr - allow ecdsa-nist-p256 and -p384 in FIPS mode  (Nicolai Stange)
The kernel provides implementations of the NIST ECDSA signature verification primitives. For key sizes of 256 and 384 bits respectively they are approved and can be enabled in FIPS mode. Do so. Signed-off-by: Nicolai Stange <nstange@suse.de> Signed-off-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-06  crypto: testmgr - disallow plain ghash in FIPS mode  (Nicolai Stange)
ghash may be used only as part of the gcm(aes) construction in FIPS mode. Since commit d6097b8d5d55 ("crypto: api - allow algs only in specific constructions in FIPS mode") there's support for using spawns which are themselves marked as non-approved from approved template instantiations. So simply mark plain ghash as non-approved in testmgr to block any attempts at direct instantiation in FIPS mode. Signed-off-by: Nicolai Stange <nstange@suse.de> Signed-off-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-01-06  crypto: testmgr - disallow plain cbcmac(aes) in FIPS mode  (Nicolai Stange)
cbcmac(aes) may be used only as part of the ccm(aes) construction in FIPS mode. Since commit d6097b8d5d55 ("crypto: api - allow algs only in specific constructions in FIPS mode") there's support for using spawns which are themselves marked as non-approved from approved template instantiations. So simply mark plain cbcmac(aes) as non-approved in testmgr to block any attempts at direct instantiation in FIPS mode. Signed-off-by: Nicolai Stange <nstange@suse.de> Signed-off-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-14  Merge tag 'v6.2-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)
Pull crypto updates from Herbert Xu:

 "API:
   - Optimise away self-test overhead when they are disabled
   - Support symmetric encryption via keyring keys in af_alg
   - Flip hwrng default_quality, the default is now maximum entropy

  Algorithms:
   - Add library version of aesgcm
   - CFI fixes for assembly code
   - Add arm/arm64 accelerated versions of sm3/sm4

  Drivers:
   - Remove assumption on arm64 that kmalloc is DMA-aligned
   - Fix selftest failures in rockchip
   - Add support for RK3328/RK3399 in rockchip
   - Add deflate support in qat
   - Merge ux500 into stm32
   - Add support for TEE for PCI ID 0x14CA in ccp
   - Add mt7986 support in mtk
   - Add MaxLinear platform support in inside-secure
   - Add NPCM8XX support in npcm"

* tag 'v6.2-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (184 commits)
  crypto: ux500/cryp - delete driver
  crypto: stm32/cryp - enable for use with Ux500
  crypto: stm32 - enable drivers to be used on Ux500
  dt-bindings: crypto: Let STM32 define Ux500 CRYP
  hwrng: geode - Fix PCI device refcount leak
  hwrng: amd - Fix PCI device refcount leak
  crypto: qce - Set DMA alignment explicitly
  crypto: octeontx2 - Set DMA alignment explicitly
  crypto: octeontx - Set DMA alignment explicitly
  crypto: keembay - Set DMA alignment explicitly
  crypto: safexcel - Set DMA alignment explicitly
  crypto: hisilicon/hpre - Set DMA alignment explicitly
  crypto: chelsio - Set DMA alignment explicitly
  crypto: ccree - Set DMA alignment explicitly
  crypto: ccp - Set DMA alignment explicitly
  crypto: cavium - Set DMA alignment explicitly
  crypto: img-hash - Fix variable dereferenced before check 'hdev->req'
  crypto: arm64/ghash-ce - use frame_push/pop macros consistently
  crypto: arm64/crct10dif - use frame_push/pop macros consistently
  crypto: arm64/aes-modes - use frame_push/pop macros consistently
  ...
2022-12-12  Merge tag 'pull-iov_iter' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds)
Pull iov_iter updates from Al Viro:

 "iov_iter work; most of that is about getting rid of direction
  misannotations and (hopefully) preventing more of the same for the
  future"

* tag 'pull-iov_iter' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  use less confusing names for iov_iter direction initializers
  iov_iter: saner checks for attempt to copy to/from iterator
  [xen] fix "direction" argument of iov_iter_kvec()
  [vhost] fix 'direction' argument of iov_iter_{init,bvec}()
  [target] fix iov_iter_bvec() "direction" argument
  [s390] memcpy_real(): WRITE is "data source", not destination...
  [s390] zcore: WRITE is "data source", not destination...
  [infiniband] READ is "data destination", not source...
  [fsi] WRITE is "data source", not destination...
  [s390] copy_oldmem_kernel() - WRITE is "data source", not destination
  csum_and_copy_to_iter(): handle ITER_DISCARD
  get rid of unlikely() on page_copy_sane() calls
2022-11-25  use less confusing names for iov_iter direction initializers  (Al Viro)
READ/WRITE proved to be actively confusing - the meanings are "data destination, as used with read(2)" and "data source, as used with write(2)", but people keep interpreting those as "we read data from it" and "we write data to it", i.e. exactly the wrong way. Call them ITER_DEST and ITER_SOURCE - at least that is harder to misinterpret... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
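A short usage sketch of the renamed initializers (illustrative, not from the commit): the direction now states what the iterator is for the copy, rather than which syscall it resembles.

  struct kvec kv = { .iov_base = buf, .iov_len = len };
  struct iov_iter iter;

  /* Data will be copied *into* buf, so the iterator is the destination;
   * ITER_SOURCE would instead mark buf as the data source. */
  iov_iter_kvec(&iter, ITER_DEST, &kv, 1, len);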
2022-11-18  treewide: use get_random_u32_inclusive() when possible  (Jason A. Donenfeld)
These cases were done with this Coccinelle:

  @@
  expression H;
  expression L;
  @@
  - (get_random_u32_below(H) + L)
  + get_random_u32_inclusive(L, H + L - 1)

  @@
  expression H;
  expression L;
  expression E;
  @@
    get_random_u32_inclusive(L, H
  - + E
  - - E
    )

  @@
  expression H;
  expression L;
  expression E;
  @@
    get_random_u32_inclusive(L, H
  - - E
  - + E
    )

  @@
  expression H;
  expression L;
  expression E;
  expression F;
  @@
    get_random_u32_inclusive(L, H
  - - E
    + F
  - + E
    )

  @@
  expression H;
  expression L;
  expression E;
  expression F;
  @@
    get_random_u32_inclusive(L, H
  - + E
    + F
  - - E
    )

And then subsequently cleaned up by hand, with several automatic cases rejected if it didn't make sense contextually.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> # for infiniband
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-11-18  treewide: use get_random_u32_below() instead of deprecated function  (Jason A. Donenfeld)
This is a simple mechanical transformation done by:

  @@
  expression E;
  @@
  - prandom_u32_max
  + get_random_u32_below
    (E)

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Darrick J. Wong <djwong@kernel.org> # for xfs
Reviewed-by: SeongJae Park <sj@kernel.org> # for damon
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> # for infiniband
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> # for arm
Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # for mmc
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-11-04  crypto: testmgr - add SM4 cts-cbc/xts/xcbc test vectors  (Tianjia Zhang)
This patch adds test vectors for the CTS-CBC/XTS/XCBC modes of the SM4 algorithm, and also adds some test vectors for SM4 GCM/CCM. Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-10-11  treewide: use get_random_{u8,u16}() when possible, part 1  (Jason A. Donenfeld)
Rather than truncate a 32-bit value to a 16-bit value or an 8-bit value, simply use the get_random_{u8,u16}() functions, which are faster than wasting the additional bytes from a 32-bit value. This was done mechanically with this coccinelle script:

  @@
  expression E;
  identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
  typedef u16;
  typedef __be16;
  typedef __le16;
  typedef u8;
  @@
  (
  - (get_random_u32() & 0xffff)
  + get_random_u16()
  |
  - (get_random_u32() & 0xff)
  + get_random_u8()
  |
  - (get_random_u32() % 65536)
  + get_random_u16()
  |
  - (get_random_u32() % 256)
  + get_random_u8()
  |
  - (get_random_u32() >> 16)
  + get_random_u16()
  |
  - (get_random_u32() >> 24)
  + get_random_u8()
  |
  - (u16)get_random_u32()
  + get_random_u16()
  |
  - (u8)get_random_u32()
  + get_random_u8()
  |
  - (__be16)get_random_u32()
  + (__be16)get_random_u16()
  |
  - (__le16)get_random_u32()
  + (__le16)get_random_u16()
  |
  - prandom_u32_max(65536)
  + get_random_u16()
  |
  - prandom_u32_max(256)
  + get_random_u8()
  |
  - E->inet_id = get_random_u32()
  + E->inet_id = get_random_u16()
  )

  @@
  identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
  typedef u16;
  identifier v;
  @@
  - u16 v = get_random_u32();
  + u16 v = get_random_u16();

  @@
  identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
  typedef u8;
  identifier v;
  @@
  - u8 v = get_random_u32();
  + u8 v = get_random_u8();

  @@
  identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
  typedef u16;
  u16 v;
  @@
  - v = get_random_u32();
  + v = get_random_u16();

  @@
  identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
  typedef u8;
  u8 v;
  @@
  - v = get_random_u32();
  + v = get_random_u8();

  // Find a potential literal
  @literal_mask@
  expression LITERAL;
  type T;
  identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
  position p;
  @@
  ((T)get_random_u32()@p & (LITERAL))

  // Examine limits
  @script:python add_one@
  literal << literal_mask.LITERAL;
  RESULT;
  @@
  value = None
  if literal.startswith('0x'):
      value = int(literal, 16)
  elif literal[0] in '123456789':
      value = int(literal, 10)
  if value is None:
      print("I don't know how to handle %s" % (literal))
      cocci.include_match(False)
  elif value < 256:
      coccinelle.RESULT = cocci.make_ident("get_random_u8")
  elif value < 65536:
      coccinelle.RESULT = cocci.make_ident("get_random_u16")
  else:
      print("Skipping large mask of %s" % (literal))
      cocci.include_match(False)

  // Replace the literal mask with the calculated result.
  @plus_one@
  expression literal_mask.LITERAL;
  position literal_mask.p;
  identifier add_one.RESULT;
  identifier FUNC;
  @@
  - (FUNC()@p & (LITERAL))
  + (RESULT() & LITERAL)

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Yury Norov <yury.norov@gmail.com>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Toke Høiland-Jørgensen <toke@toke.dk> # for sch_cake
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-10-11  treewide: use prandom_u32_max() when possible, part 1  (Jason A. Donenfeld)
Rather than incurring a division or requesting too many random bytes for the given range, use the prandom_u32_max() function, which only takes the minimum required bytes from the RNG and avoids divisions. This was done mechanically with this coccinelle script:

  @basic@
  expression E;
  type T;
  identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
  typedef u64;
  @@
  (
  - ((T)get_random_u32() % (E))
  + prandom_u32_max(E)
  |
  - ((T)get_random_u32() & ((E) - 1))
  + prandom_u32_max(E * XXX_MAKE_SURE_E_IS_POW2)
  |
  - ((u64)(E) * get_random_u32() >> 32)
  + prandom_u32_max(E)
  |
  - ((T)get_random_u32() & ~PAGE_MASK)
  + prandom_u32_max(PAGE_SIZE)
  )

  @multi_line@
  identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
  identifier RAND;
  expression E;
  @@
  - RAND = get_random_u32();
    ... when != RAND
  - RAND %= (E);
  + RAND = prandom_u32_max(E);

  // Find a potential literal
  @literal_mask@
  expression LITERAL;
  type T;
  identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
  position p;
  @@
  ((T)get_random_u32()@p & (LITERAL))

  // Add one to the literal.
  @script:python add_one@
  literal << literal_mask.LITERAL;
  RESULT;
  @@
  value = None
  if literal.startswith('0x'):
      value = int(literal, 16)
  elif literal[0] in '123456789':
      value = int(literal, 10)
  if value is None:
      print("I don't know how to handle %s" % (literal))
      cocci.include_match(False)
  elif value == 2**32 - 1 or value == 2**31 - 1 or value == 2**24 - 1 or value == 2**16 - 1 or value == 2**8 - 1:
      print("Skipping 0x%x for cleanup elsewhere" % (value))
      cocci.include_match(False)
  elif value & (value + 1) != 0:
      print("Skipping 0x%x because it's not a power of two minus one" % (value))
      cocci.include_match(False)
  elif literal.startswith('0x'):
      coccinelle.RESULT = cocci.make_expr("0x%x" % (value + 1))
  else:
      coccinelle.RESULT = cocci.make_expr("%d" % (value + 1))

  // Replace the literal mask with the calculated result.
  @plus_one@
  expression literal_mask.LITERAL;
  position literal_mask.p;
  expression add_one.RESULT;
  identifier FUNC;
  @@
  - (FUNC()@p & (LITERAL))
  + prandom_u32_max(RESULT)

  @collapse_ret@
  type T;
  identifier VAR;
  expression E;
  @@
  {
  - T VAR;
  - VAR = (E);
  - return VAR;
  + return E;
  }

  @drop_var@
  type T;
  identifier VAR;
  @@
  {
  - T VAR;
    ... when != VAR
  }

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Yury Norov <yury.norov@gmail.com>
Reviewed-by: KP Singh <kpsingh@kernel.org>
Reviewed-by: Jan Kara <jack@suse.cz> # for ext4 and sbitmap
Reviewed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> # for drbd
Acked-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Heiko Carstens <hca@linux.ibm.com> # for s390
Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # for mmc
Acked-by: Darrick J. Wong <djwong@kernel.org> # for xfs
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-09-02  crypto: testmgr - fix indentation for test_acomp() args  (Lucas Segarra Fernandez)
Set the right indentation for the test_acomp() arguments. Signed-off-by: Lucas Segarra Fernandez <lucas.segarra.fernandez@intel.com> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-08-19  crypto: testmgr - don't generate WARN for missing modules  (Robert Elliott)
This userspace command:

  modprobe tcrypt
or
  modprobe tcrypt mode=0

runs all the tcrypt test cases numbered <200 (i.e., all the test cases calling tcrypt_test() and returning return values). Tests are sparsely numbered from 0 to 1000. For example:

  modprobe tcrypt mode=12

tests sha512, and

  modprobe tcrypt mode=152

tests rfc4543(gcm(aes)) - AES-GCM as GMAC.

The test manager generates WARNING crashdumps every time it attempts a test using an algorithm that is not available (not built-in to the kernel or available as a module):

  alg: skcipher: failed to allocate transform for ecb(arc4): -2
  ------------[ cut here ]-----------
  alg: self-tests for ecb(arc4) (ecb(arc4)) failed (rc=-2)
  WARNING: CPU: 9 PID: 4618 at crypto/testmgr.c:5777 alg_test+0x30b/0x510
  [50 more lines....]
  ---[ end trace 0000000000000000 ]---

If the kernel is compiled with CRYPTO_USER_API_ENABLE_OBSOLETE disabled (the default), then these algorithms are not compiled into the kernel or made into modules and trigger WARNINGs: arc4 tea xtea khazad anubis xeta seed

Additionally, any other algorithms that are not enabled in .config will generate WARNINGs. In RHEL 9.0, for example, the default selection of algorithms leads to 16 WARNING dumps.

One attempt to fix this was by modifying tcrypt_test() to check crypto_has_alg() and immediately return 0 if crypto_has_alg() fails, rather than proceed and return a non-zero error value that causes the caller (alg_test() in crypto/testmgr.c) to invoke WARN(). That knocks out too many algorithms, though; some combinations like ctr(des3_ede) would work.

Instead, change the condition on the WARN to ignore a return value of -ENOENT, which is the value returned when the algorithm or combination of algorithms doesn't exist. Add a pr_warn to communicate that information in case the WARN is skipped. This approach allows algorithm tests to work that are combinations, not provided by one driver, like ctr(blowfish).

Result - no more WARNINGs:

  modprobe tcrypt
  [ 115.541765] tcrypt: testing md5
  [ 115.556415] tcrypt: testing sha1
  [ 115.570463] tcrypt: testing ecb(des)
  [ 115.585303] cryptomgr: alg: skcipher: failed to allocate transform for ecb(des): -2
  [ 115.593037] cryptomgr: alg: self-tests for ecb(des) using ecb(des) failed (rc=-2)
  [ 115.593038] tcrypt: testing cbc(des)
  [ 115.610641] cryptomgr: alg: skcipher: failed to allocate transform for cbc(des): -2
  [ 115.618359] cryptomgr: alg: self-tests for cbc(des) using cbc(des) failed (rc=-2)
  ...

Signed-off-by: Robert Elliott <elliott@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
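An illustrative sketch of the relaxed condition described above (not the exact upstream diff): only a missing algorithm (-ENOENT) is downgraded from a WARN to a pr_warn.

  if (rc && rc != -ENOENT) {
          /* A real self-test failure still taints the kernel via WARN(). */
          WARN(1, "alg: self-tests for %s using %s failed (rc=%d)",
               alg, driver, rc);
  } else if (rc == -ENOENT) {
          /* The algorithm simply isn't built in or available as a module. */
          pr_warn("alg: self-tests for %s using %s failed due to missing algorithm (rc=%d)\n",
                  alg, driver, rc);
  }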
2022-08-19  crypto: testmgr - extend acomp tests for NULL destination buffer  (Lucas Segarra Fernandez)
Acomp API supports NULL destination buffer for compression and decompression requests. In such cases allocation is performed by API. Add test cases for crypto_acomp_compress() and crypto_acomp_decompress() with dst buffer allocated by API. Tests will only run if CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y. Signed-off-by: Lucas Segarra Fernandez <lucas.segarra.fernandez@intel.com> Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
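A minimal sketch of the case these tests cover, assuming req, src_sg and slen are already set up: passing a NULL destination lets the acomp API allocate the output buffer.

  /* NULL dst and dlen == 0: the API allocates the destination for us. */
  acomp_request_set_params(req, src_sg, NULL, slen, 0);
  err = crypto_acomp_compress(req);
  if (!err)
          pr_debug("compressed length: %u\n", req->dlen);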
2022-07-15  crypto: testmgr - add ARIA testmgr tests  (Taehee Yoo)
This patch adds test vectors for ecb(aria), cbc(aria), cfb(aria), ctr(aria), and gcm(aria). The ecb test vector is from the RFC standard. The cbc, cfb, and ctr test vectors are from KISA[1], who developed the ARIA algorithm. The gcm(aria) vector is from the openssl test vectors. [1] https://seed.kisa.or.kr/kisa/kcmvp/EgovVerification.do (Korean) Signed-off-by: Taehee Yoo <ap420073@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-06-10  crypto: blake2s - remove shash module  (Jason A. Donenfeld)
BLAKE2s has no currently known use as an shash. Just remove all of this unnecessary plumbing. Removing this shash was something we talked about back when we were making BLAKE2s a built-in, but I simply never got around to doing it. So this completes that project. Importantly, this fixes a bug in which the lib code depends on crypto_simd_disabled_for_test, causing linker errors. Also add more alignment tests to the selftests and compare SIMD and non-SIMD compression functions, to make up for what we lose from testmgr.c. Reported-by: gaochao <gaochao49@huawei.com> Cc: Eric Biggers <ebiggers@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: stable@vger.kernel.org Fixes: 6048fdcc5f26 ("lib/crypto: blake2s: include as built-in") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-06-10  crypto: hctr2 - Add HCTR2 support  (Nathan Huckleberry)
Add support for HCTR2 as a template. HCTR2 is a length-preserving encryption mode that is efficient on processors with instructions to accelerate AES and carryless multiplication, e.g. x86 processors with AES-NI and CLMUL, and ARM processors with the ARMv8 Crypto Extensions.

As a length-preserving encryption mode, HCTR2 is suitable for applications such as storage encryption where ciphertext expansion is not possible, and thus authenticated encryption cannot be used. Currently, such applications usually use XTS, or in some cases Adiantum. XTS has the disadvantage that it is a narrow-block mode: a bitflip will only change 16 bytes in the resulting ciphertext or plaintext. This reveals more information to an attacker than necessary. HCTR2 is a wide-block mode, so it provides a stronger security property: a bitflip will change the entire message.

HCTR2 is somewhat similar to Adiantum, which is also a wide-block mode. However, HCTR2 is designed to take advantage of existing crypto instructions, while Adiantum targets devices without such hardware support. Adiantum is also designed with longer messages in mind, while HCTR2 is designed to be efficient even on short messages.

HCTR2 requires POLYVAL and XCTR as components. More information on HCTR2 can be found here: "Length-preserving encryption with HCTR2": https://eprint.iacr.org/2021/1441.pdf

Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
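A minimal allocation sketch for the new template (illustrative; "hctr2(aes)" is the usual instantiation, and error handling is trimmed):

  struct crypto_skcipher *tfm;

  tfm = crypto_alloc_skcipher("hctr2(aes)", 0, 0);
  if (IS_ERR(tfm))
          return PTR_ERR(tfm);

  /* A single AES key; the per-message tweak is passed as the IV. */
  err = crypto_skcipher_setkey(tfm, key, keylen);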
2022-06-10  crypto: polyval - Add POLYVAL support  (Nathan Huckleberry)
Add support for POLYVAL, an ε-Δ-universal hash function similar to GHASH. This patch only uses POLYVAL as a component to implement HCTR2 mode. It should be noted that POLYVAL was originally specified for use in AES-GCM-SIV (RFC 8452), but the kernel does not currently support this mode. POLYVAL is implemented as an shash algorithm. The implementation is modified from ghash-generic.c. For more information on POLYVAL see: Length-preserving encryption with HCTR2: https://eprint.iacr.org/2021/1441.pdf AES-GCM-SIV: Nonce Misuse-Resistant Authenticated Encryption: https://datatracker.ietf.org/doc/html/rfc8452 Signed-off-by: Nathan Huckleberry <nhuck@google.com> Reviewed-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
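A minimal sketch of using the new hash (illustrative): like ghash, POLYVAL is exposed as a keyed shash with a 16-byte key and digest.

  struct crypto_shash *tfm = crypto_alloc_shash("polyval", 0, 0);

  if (IS_ERR(tfm))
          return PTR_ERR(tfm);
  err = crypto_shash_setkey(tfm, key, 16);   /* 16-byte POLYVAL key */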
2022-06-10  crypto: xctr - Add XCTR support  (Nathan Huckleberry)
Add a generic implementation of XCTR mode as a template. XCTR is a blockcipher mode similar to CTR mode. XCTR uses XORs and little-endian addition rather than big-endian arithmetic, which has two advantages: it is slightly faster on little-endian CPUs, and it is less likely to be implemented incorrectly, since integer overflows are not possible on practical input sizes. XCTR is used as a component to implement HCTR2. More information on XCTR mode can be found in the HCTR2 paper: https://eprint.iacr.org/2021/1441.pdf Signed-off-by: Nathan Huckleberry <nhuck@google.com> Reviewed-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
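A conceptual sketch of the keystream construction described above (illustrative, not the kernel's exact code): block i of the keystream is E_K(IV xor [i]), with the counter encoded little-endian and starting at 1, so there is no big-endian carry chain to get wrong.

  /* One 16-byte keystream block per iteration; ctrblk and ks are u8[16]. */
  for (i = 1; i <= nblocks; i++) {
          memcpy(ctrblk, iv, 16);
          /* XOR the little-endian counter into the low bytes of the IV. */
          put_unaligned_le32(get_unaligned_le32(ctrblk) ^ i, ctrblk);
          aes_encrypt(ctx, ks, ctrblk);                    /* ks = E_K(ctrblk) */
          crypto_xor_cpy(dst + (i - 1) * 16, src + (i - 1) * 16, ks, 16);
  }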
2022-04-08  crypto: testmgr - test in-place en/decryption with two sglists  (Eric Biggers)
As was established in the thread https://lore.kernel.org/linux-crypto/20220223080400.139367-1-gilad@benyossef.com/T/#u, many crypto API users doing in-place en/decryption don't use the same scatterlist pointers for the source and destination, but rather use separate scatterlists that point to the same memory. This case isn't tested by the self-tests, resulting in bugs. This is the natural usage of the crypto API in some cases, so requiring API users to avoid this usage is not reasonable. Therefore, update the self-tests to start testing this case. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
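A sketch of the usage pattern that is now exercised (illustrative): in-place encryption where source and destination are distinct scatterlists describing the same buffer.

  struct scatterlist src_sg, dst_sg;

  sg_init_one(&src_sg, buf, len);
  sg_init_one(&dst_sg, buf, len);          /* same memory, separate sglist */
  skcipher_request_set_crypt(req, &src_sg, &dst_sg, len, iv);
  err = crypto_skcipher_encrypt(req);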
2022-03-26  Merge tag 'for-5.18/64bit-pi-2022-03-25' of git://git.kernel.dk/linux-block  (Linus Torvalds)
Pull block layer 64-bit data integrity support from Jens Axboe:

 "This adds support for 64-bit data integrity in the block layer and in
  NVMe"

* tag 'for-5.18/64bit-pi-2022-03-25' of git://git.kernel.dk/linux-block:
  crypto: fix crc64 testmgr digest byte order
  nvme: add support for enhanced metadata
  block: add pi for extended integrity
  crypto: add rocksoft 64b crc guard tag framework
  lib: add rocksoft model crc64
  linux/kernel: introduce lower_48_bits function
  asm-generic: introduce be48 unaligned accessors
  nvme: allow integrity on extended metadata formats
  block: support pi with extended metadata
2022-03-07  crypto: add rocksoft 64b crc guard tag framework  (Keith Busch)
Hardware specific features may be able to calculate a crc64, so provide a framework for drivers to register their implementation. If nothing is registered, fallback to the generic table lookup implementation. The implementation is modeled after the crct10dif equivalent. Signed-off-by: Keith Busch <kbusch@kernel.org> Link: https://lore.kernel.org/r/20220303201312.3255347-7-kbusch@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-03  crypto: dh - disallow plain "dh" usage in FIPS mode  (Nicolai Stange)
SP800-56Arev3, sec. 5.5.2 ("Assurance of Domain-Parameter Validity") asserts that an implementation needs to verify domain parameter validity, which boils down to either

- the domain parameters corresponding to some known safe-prime group explicitly listed to be approved in the document, or

- for parameters conforming to a "FIPS 186-type parameter-size set", that the implementation needs to perform an explicit domain parameter verification, which would require access to the "seed" and "counter" values used in their generation.

The latter is not easily feasible and moreover, SP800-56Arev3 states that safe-prime groups are preferred and that FIPS 186-type parameter sets should only be supported for backward compatibility, if at all.

Mark "dh" as not fips_allowed in testmgr. Note that the safe-prime ffdheXYZ(dh) wrappers are not affected by this change: as these enforce some approved safe-prime group each, their usage is still allowed in FIPS mode.

This change will effectively render the keyctl(KEYCTL_DH_COMPUTE) syscall unusable in FIPS mode, but it has been brought up that this might even be a good thing ([1]).

[1] https://lore.kernel.org/r/20211217055227.GA20698@gondor.apana.org.au

Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-03-03  crypto: api - allow algs only in specific constructions in FIPS mode  (Nicolai Stange)
Currently we do not distinguish between algorithms that fail on the self-test vs. those which are disabled in FIPS mode (not allowed). Both are marked as having failed the self-test.

Recently the need arose to allow the usage of certain algorithms only as arguments to specific template instantiations in FIPS mode. For example, standalone "dh" must be blocked, but e.g. "ffdhe2048(dh)" is allowed. Other potential use cases include "cbcmac(aes)", which must only be used with ccm(), or "ghash", which must be used only for gcm().

This patch allows this scenario by adding a new flag FIPS_INTERNAL to indicate those algorithms that are not FIPS-allowed. They can then be used as template arguments only, i.e. when looked up via crypto_grab_spawn() to be more specific. The FIPS_INTERNAL bit gets propagated upwards recursively into the surrounding template instances, until the construction eventually matches an explicit testmgr entry with ->fips_allowed being set, if any.

The behaviour to skip !->fips_allowed self-test executions in FIPS mode will be retained. Note that this effectively means that FIPS_INTERNAL algorithms are handled very similarly to the INTERNAL ones in this regard. It is expected that the FIPS_INTERNAL algorithms will receive sufficient testing when the larger constructions they're a part of, if any, get exercised by testmgr.

Note that as a side-effect of this patch algorithms which are not FIPS-allowed will now return ENOENT instead of ELIBBAD. Hopefully this is not an issue as some people were relying on this already.

Link: https://lore.kernel.org/r/YeEVSaMEVJb3cQkq@gondor.apana.org.au
Originally-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-03-03  crypto: testmgr - add known answer tests for ffdheXYZ(dh) templates  (Nicolai Stange)
Add known answer tests for the ffdhe2048(dh), ffdhe3072(dh), ffdhe4096(dh), ffdhe6144(dh) and ffdhe8192(dh) templates introduced with the previous patch to the testmgr. All TVs have been generated with OpenSSL. Signed-off-by: Nicolai Stange <nstange@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
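A minimal sketch of instantiating one of the tested templates (illustrative): the ffdheXYZ(dh) wrappers are ordinary kpp algorithms layered over plain "dh".

  struct crypto_kpp *tfm = crypto_alloc_kpp("ffdhe2048(dh)", 0, 0);

  if (IS_ERR(tfm))
          return PTR_ERR(tfm);
  /* crypto_kpp_set_secret(), generate_public_key() and
   * compute_shared_secret() then work just as for plain "dh". */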
2022-02-11  crypto: hmac - add fips_skip support  (Stephan Müller)
By adding the support for the flag fips_skip, hash / HMAC test vectors may be marked to be not applicable in FIPS mode. Such vectors are silently skipped in FIPS mode. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-01-31  crypto: testmgr - Move crypto_simd_disabled_for_test out  (Herbert Xu)
As testmgr is part of cryptomgr which was designed to be unloadable as a module, it shouldn't export any symbols for other crypto modules to use as that would prevent it from being unloaded. All its functionality is meant to be accessed through notifiers. The symbol crypto_simd_disabled_for_test was added to testmgr which caused it to be pinned as a module if its users were also loaded. This patch moves it out of testmgr and into crypto/algapi.c so cryptomgr can again be unloaded and replaced on demand. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-11-26  crypto: des - disallow des3 in FIPS mode  (Stephan Müller)
On Dec 31 2023 NIST sunsets TDES for FIPS use. To prevent FIPS validations completed in the future from being affected by the TDES sunsetting, disallow TDES already now. Otherwise a FIPS validation would need to be "touched again" at the end of 2023 to handle TDES accordingly. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-10-08  crypto: testmgr - Only disable migration in crypto_disable_simd_for_test()  (Sebastian Andrzej Siewior)
crypto_disable_simd_for_test() disables preemption in order to receive a stable per-CPU variable which it needs to modify in order to alter crypto_simd_usable() results. This can also be achieved by migrate_disable(), which forbids CPU migrations but allows the task to be preempted. The latter is important for PREEMPT_RT since operations like skcipher_walk_first() may allocate memory, which must not happen with disabled preemption on PREEMPT_RT. Use migrate_disable() in crypto_disable_simd_for_test() to achieve a stable per-CPU pointer. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
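A sketch of the pattern described above (illustrative): the per-CPU write only needs the task pinned to its current CPU, not a non-preemptible section.

  /* disable side: */
  migrate_disable();               /* pin to this CPU, stay preemptible */
  this_cpu_write(crypto_simd_disabled_for_test, true);

  /* ... run the !SIMD test path ...; the matching re-enable side does: */
  this_cpu_write(crypto_simd_disabled_for_test, false);
  migrate_enable();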
2021-08-21  crypto: testmgr - Add GCM/CCM mode test of SM4 algorithm  (Tianjia Zhang)
The GCM/CCM mode of the SM4 algorithm is defined in the RFC 8998 specification, and the test case data also comes from RFC 8998. Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-06-28  crypto: drbg - self test for HMAC(SHA-512)  (Stephan Müller)
Considering that the HMAC(SHA-512) DRBG is the default DRBG now, a self test is to be provided. The test vector is obtained from a successful NIST ACVP test run. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-05-28  crypto: ecdh - add test suite for NIST P384  (Hui Tang)
Add test vector parameters for NIST P384 and add a NIST P384 entry to the vector of tests. The vector parameters are from: https://datatracker.ietf.org/doc/html/rfc5903#section-3.1 Signed-off-by: Hui Tang <tanghui20@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-05-28  crypto: ecdh - fix ecdh-nist-p192's entry in testmgr  (Hui Tang)
Add a comment that p192 will fail to register in FIPS mode. Fix ecdh-nist-p192's entry in testmgr by removing the ifdefs and not setting fips_allowed. Signed-off-by: Hui Tang <tanghui20@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-03-26  Merge branch 'ecc'  (Herbert Xu)
This pulls in the NIST P384/256/192 x509 changes.