path: root/include/crypto
2022-02-04  lib/crypto: blake2s: avoid indirect calls to compression function for Clang CFI  (Jason A. Donenfeld)

blake2s_compress_generic is weakly aliased by blake2s_compress. The current harness for function selection uses a function pointer, which is ordinarily inlined and resolved at compile time.

But when Clang's CFI is enabled, CFI still triggers when making an indirect call via a weak symbol. This seems like a bug in Clang's CFI, as though it's bucketing weak symbols and strong symbols differently. It also only seems to trigger when "full LTO" mode is used, rather than "thin LTO".

  [ 0.000000][ T0] Kernel panic - not syncing: CFI failure (target: blake2s_compress_generic+0x0/0x1444)
  [ 0.000000][ T0] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.16.0-mainline-06981-g076c855b846e #1
  [ 0.000000][ T0] Hardware name: MT6873 (DT)
  [ 0.000000][ T0] Call trace:
  [ 0.000000][ T0]  dump_backtrace+0xfc/0x1dc
  [ 0.000000][ T0]  dump_stack_lvl+0xa8/0x11c
  [ 0.000000][ T0]  panic+0x194/0x464
  [ 0.000000][ T0]  __cfi_check_fail+0x54/0x58
  [ 0.000000][ T0]  __cfi_slowpath_diag+0x354/0x4b0
  [ 0.000000][ T0]  blake2s_update+0x14c/0x178
  [ 0.000000][ T0]  _extract_entropy+0xf4/0x29c
  [ 0.000000][ T0]  crng_initialize_primary+0x24/0x94
  [ 0.000000][ T0]  rand_initialize+0x2c/0x6c
  [ 0.000000][ T0]  start_kernel+0x2f8/0x65c
  [ 0.000000][ T0]  __primary_switched+0xc4/0x7be4
  [ 0.000000][ T0] Rebooting in 5 seconds..

Nonetheless, the function pointer method isn't so terrific anyway, so this patch replaces it with a simple boolean, which also gets inlined away. This successfully works around the Clang bug.

In general, I'm not too keen on all of the indirection involved here; it clearly does more harm than good. Hopefully the whole thing can get cleaned up down the road when lib/crypto is overhauled more comprehensively. But for now, we go with a simple bandaid.

Fixes: 6048fdcc5f26 ("lib/crypto: blake2s: include as built-in")
Link: https://github.com/ClangBuiltLinux/linux/issues/1567
Reported-by: Miles Chen <miles.chen@mediatek.com>
Tested-by: Miles Chen <miles.chen@mediatek.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: John Stultz <john.stultz@linaro.org>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
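The pattern described above can be illustrated with a small user-space sketch (assumed names, not the kernel's actual code): a weak default implementation that an arch object may override with a strong symbol, plus a boolean selector that the compiler can resolve to a direct call instead of an indirect one.

  /* Hypothetical stand-in names; builds with gcc or clang on an ELF target. */
  #include <stdio.h>

  void compress_generic(const char *msg)
  {
          printf("generic: %s\n", msg);
  }

  /* Weak alias: an optimized object file may provide a strong "compress"
   * that wins at link time, analogous to blake2s_compress above. */
  void compress(const char *msg) __attribute__((weak, alias("compress_generic")));

  static inline void do_update(const char *msg, int force_generic)
  {
          if (force_generic)
                  compress_generic(msg);  /* direct call, trivially CFI-safe */
          else
                  compress(msg);          /* weak symbol, still a direct call site */
  }

  int main(void)
  {
          do_update("hello", 0);
          return 0;
  }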
2022-01-18  lib/crypto: blake2s: move hmac construction into wireguard  (Jason A. Donenfeld)
Basically nobody should use blake2s in an HMAC construction; it already has a keyed variant. But unfortunately for historical reasons, Noise, used by WireGuard, uses HKDF quite strictly, which means we have to use this. Because this really shouldn't be used by others, this commit moves it into wireguard's noise.c locally, so that kernels that aren't using WireGuard don't get this superfluous code baked in. On m68k systems, this shaves off ~314 bytes. Cc: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Geert Uytterhoeven <geert@linux-m68k.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
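For reference, the keyed-prefix/suffix structure being moved is the standard HMAC construction. A simplified sketch over the blake2s library interface declared in <crypto/blake2s.h> might look as follows (this assumes a key no longer than the 64-byte block; a full implementation, like the one kept in wireguard, would first hash oversized keys):

  #include <crypto/blake2s.h>
  #include <linux/string.h>

  /* Hedged sketch, not the wireguard code: HMAC-BLAKE2s for keylen <= 64. */
  static void hmac_blake2s(u8 *out, const u8 *in, size_t inlen,
                           const u8 *key, size_t keylen)
  {
          struct blake2s_state state;
          u8 x_key[BLAKE2S_BLOCK_SIZE] = { 0 };
          u8 i_hash[BLAKE2S_HASH_SIZE];
          int i;

          memcpy(x_key, key, keylen);

          for (i = 0; i < BLAKE2S_BLOCK_SIZE; ++i)
                  x_key[i] ^= 0x36;               /* inner pad */
          blake2s_init(&state, BLAKE2S_HASH_SIZE);
          blake2s_update(&state, x_key, BLAKE2S_BLOCK_SIZE);
          blake2s_update(&state, in, inlen);
          blake2s_final(&state, i_hash);

          for (i = 0; i < BLAKE2S_BLOCK_SIZE; ++i)
                  x_key[i] ^= 0x5c ^ 0x36;        /* flip inner pad to outer pad */
          blake2s_init(&state, BLAKE2S_HASH_SIZE);
          blake2s_update(&state, x_key, BLAKE2S_BLOCK_SIZE);
          blake2s_update(&state, i_hash, BLAKE2S_HASH_SIZE);
          blake2s_final(&state, out);
  }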
2022-01-11  Merge tag 'tpmdd-next-v5.17-fixed' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd  (Linus Torvalds)

Pull TPM updates from Jarkko Sakkinen:
 "Other than bug fixes for TPM, this includes a patch for asymmetric keys to allow to look up and verify with self-signed certificates (keys without so called AKID - Authority Key Identifier) using a new "dn:" prefix in the query"

* tag 'tpmdd-next-v5.17-fixed' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd:
  lib: remove redundant assignment to variable ret
  tpm: fix NPE on probe for missing device
  tpm: fix potential NULL pointer access in tpm_del_char_device
  tpm: Add Upgrade/Reduced mode support for TPM2 modules
  char: tpm: cr50: Set TPM_FIRMWARE_POWER_MANAGED based on device property
  keys: X.509 public key issuer lookup without AKID
  tpm_tis: Fix an error handling path in 'tpm_tis_core_init()'
  tpm: tpm_tis_spi_cr50: Add default RNG quality
  tpm/st33zp24: drop unneeded over-commenting
  tpm: add request_locality before write TPM_INT_ENABLE
2022-01-11  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)

Pull crypto updates from Herbert Xu:
 "Algorithms:
   - Drop alignment requirement for data in aesni
   - Use synchronous seeding from the /dev/random in DRBG
   - Reseed nopr DRBGs every 5 minutes from /dev/random
   - Add KDF algorithms currently used by security/DH
   - Fix lack of entropy on some AMD CPUs with jitter RNG

  Drivers:
   - Add support for the D1 variant in sun8i-ce
   - Add SEV_INIT_EX support in ccp
   - PFVF support for GEN4 host driver in qat
   - Compression support for GEN4 devices in qat
   - Add cn10k random number generator support"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (145 commits)
  crypto: af_alg - rewrite NULL pointer check
  lib/mpi: Add the return value check of kcalloc()
  crypto: qat - fix definition of ring reset results
  crypto: hisilicon - cleanup warning in qm_get_qos_value()
  crypto: kdf - select SHA-256 required for self-test
  crypto: x86/aesni - don't require alignment of data
  crypto: ccp - remove unneeded semicolon
  crypto: stm32/crc32 - Fix kernel BUG triggered in probe()
  crypto: s390/sha512 - Use macros instead of direct IV numbers
  crypto: sparc/sha - remove duplicate hash init function
  crypto: powerpc/sha - remove duplicate hash init function
  crypto: mips/sha - remove duplicate hash init function
  crypto: sha256 - remove duplicate generic hash init function
  crypto: jitter - add oversampling of noise source
  MAINTAINERS: update SEC2 driver maintainers list
  crypto: ux500 - Use platform_get_irq() to get the interrupt
  crypto: hisilicon/qm - disable qm clock-gating
  crypto: omap-aes - Fix broken pm_runtime_and_get() usage
  MAINTAINERS: update caam crypto driver maintainers list
  crypto: octeontx2 - prevent underflow in get_cores_bmap()
  ...
2022-01-09  keys: X.509 public key issuer lookup without AKID  (Andrew Zaborowski)

There are non-root X.509 v3 certificates in use out there that contain no Authority Key Identifier extension (RFC5280 section 4.2.1.1). For trust verification purposes the kernel asymmetric key type keeps two struct asymmetric_key_id instances that the key can be looked up by, and another two to look up the key's issuer. The x509 public key type and the PKCS7 type generate them from the SKID and AKID extensions in the certificate. In effect current code has no way to look up the issuer certificate for verification without the AKID.

To remedy this, add a third asymmetric_key_id blob to the arrays in both asymmetric_key_id's (for the certificate subject) and in the public_key_signature's auth_ids (for issuer lookup), using just the raw subject and issuer DNs from the certificate. Adapt asymmetric_key_ids() and its callers to use the third ID for lookups when none of the other two are available. Attempt to keep the logic intact when they are, to minimise behaviour changes. Adapt the restrict functions' NULL-checks to include that ID too. Do not modify the lookup logic in pkcs7_verify.c, the AKID extensions are still required there.

Internally use a new "dn:" prefix to the search specifier string generated for the key lookup in find_asymmetric_key(). This tells asymmetric_key_match_preparse to only match the data against the raw DN in the third ID and shouldn't conflict with search specifiers already in use.

In effect implement what (2) in the struct asymmetric_key_id comment (include/keys/asymmetric-type.h) is probably talking about already, so do not modify that comment. It is also how "openssl verify" looks up issuer certificates without the AKID available. Lookups by the raw DN are unambiguous only provided that the CAs respect the condition in RFC5280 4.2.1.1 that the AKID may only be omitted if the CA uses a single signing key.

The following is an example of two things that this change enables. A self-signed certificate is generated following the example from https://letsencrypt.org/docs/certificates-for-localhost/, and can be looked up by an identifier and verified against itself by linking to a restricted keyring -- both things not possible before due to the missing AKID extension:

  $ openssl req -x509 -out localhost.crt -outform DER -keyout localhost.key \
      -newkey rsa:2048 -nodes -sha256 \
      -subj '/CN=localhost' -extensions EXT -config <( \
      echo -e "[dn]\nCN=localhost\n[req]\ndistinguished_name = dn\n[EXT]\n" \
        "subjectAltName=DNS:localhost\nkeyUsage=digitalSignature\n" \
        "extendedKeyUsage=serverAuth")
  $ keyring=`keyctl newring test @u`
  $ trusted=`keyctl padd asymmetric trusted $keyring < localhost.crt`; echo $trusted
  39726322
  $ keyctl search $keyring asymmetric dn:3112301006035504030c096c6f63616c686f7374
  39726322
  $ keyctl restrict_keyring $keyring asymmetric key_or_keyring:$trusted
  $ keyctl padd asymmetric verified $keyring < localhost.crt

Signed-off-by: Andrew Zaborowski <andrew.zaborowski@intel.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
2022-01-07  random: early initialization of ChaCha constants  (Dominik Brodowski)
Previously, the ChaCha constants for the primary pool were only initialized in crng_initialize_primary(), called by rand_initialize(). However, some randomness is actually extracted from the primary pool beforehand, e.g. by kmem_cache_create(). Therefore, statically initialize the ChaCha constants for the primary pool. Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: <linux-crypto@vger.kernel.org> Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
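As an illustration of what "statically initialize" means here, a sketch (the array name is an assumption for the example; the four 32-bit values are the standard ChaCha "expand 32-byte k" constants):

  #define CHACHA_CONSTANT_EXPA 0x61707865U  /* "expa" */
  #define CHACHA_CONSTANT_ND_3 0x3320646eU  /* "nd 3" */
  #define CHACHA_CONSTANT_2_BY 0x79622d32U  /* "2-by" */
  #define CHACHA_CONSTANT_TE_K 0x6b206574U  /* "te k" */

  /* Hypothetical pool layout: constants fixed at compile time, so randomness
   * extracted before crng_initialize_primary() still sees a valid ChaCha state. */
  static u32 primary_pool_state[16] = {
          CHACHA_CONSTANT_EXPA, CHACHA_CONSTANT_ND_3,
          CHACHA_CONSTANT_2_BY, CHACHA_CONSTANT_TE_K,
          /* words 4..15: key, counter and nonce, filled in at runtime */
  };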
2022-01-07  lib/crypto: blake2s: include as built-in  (Jason A. Donenfeld)
In preparation for using blake2s in the RNG, we change the way that it is wired-in to the build system. Instead of using ifdefs to select the right symbol, we use weak symbols. And because ARM doesn't need the generic implementation, we make the generic one default only if an arch library doesn't need it already, and then have arch libraries that do need it opt-in. So that the arch libraries can remain tristate rather than bool, we then split the shash part from the glue code. Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: linux-kbuild@vger.kernel.org Cc: linux-crypto@vger.kernel.org Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-12-17  crypto: api - Replace kernel.h with the necessary inclusions  (Andy Shevchenko)

When kernel.h is used in the headers it adds a lot into dependency hell, especially when circular dependencies are involved. Replace the kernel.h inclusion with the list of what is really being used.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-11-26  crypto: kdf - add SP800-108 counter key derivation function  (Stephan Müller)

SP800-108 defines three KDFs - this patch provides the counter KDF implementation.

The KDF is implemented as a service function where the caller has to maintain the hash / HMAC state. Apart from this hash/HMAC state, no additional state is required to be maintained by either the caller or the KDF implementation.

The key for the KDF is set with the crypto_kdf108_setkey function which is intended to be invoked before the caller requests a key derivation operation via crypto_kdf108_ctr_generate.

SP800-108 allows the use of either a HMAC or a hash as crypto primitive for the KDF. When a HMAC primitive is intended to be used, crypto_kdf108_setkey must be used to set the HMAC key. Otherwise, for a hash crypto primitive crypto_kdf108_ctr_generate can be used immediately after allocating the hash handle.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
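A hedged usage sketch for the HMAC case (function signatures as understood from this description; the authoritative reference is include/crypto/kdf_sp800108.h):

  #include <crypto/hash.h>
  #include <crypto/kdf_sp800108.h>
  #include <linux/err.h>
  #include <linux/uio.h>

  /* Derive dlen bytes with the counter KDF over HMAC-SHA256. */
  static int kdf_ctr_example(const u8 *key, size_t keylen,
                             const struct kvec *info, unsigned int info_nvec,
                             u8 *dst, unsigned int dlen)
  {
          struct crypto_shash *kmd;
          int ret;

          kmd = crypto_alloc_shash("hmac(sha256)", 0, 0);
          if (IS_ERR(kmd))
                  return PTR_ERR(kmd);

          /* For the HMAC variant the KDF key must be set first. */
          ret = crypto_kdf108_setkey(kmd, key, keylen, NULL, 0);
          if (!ret)
                  ret = crypto_kdf108_ctr_generate(kmd, info, info_nvec, dst, dlen);

          crypto_free_shash(kmd);
          return ret;
  }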
2021-11-26  crypto: kdf - Add key derivation self-test support code  (Stephan Müller)

As a preparation to add the key derivation implementations, the self-test data structure definition and the common test code are made available. The test framework follows the testing applied by the NIST CAVP test approach.

The structure of the test code follows the implementations found in crypto/testmgr.c|h. In case the KDF implementations will be made available via kernel crypto API templates, the test code is intended to be merged into testmgr.c|h.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-11-26  crypto: drbg - reseed 'nopr' drbgs periodically from get_random_bytes()  (Nicolai Stange)

In contrast to the fully prediction resistant 'pr' DRBGs, the 'nopr' variants get seeded once at boot and reseeded only rarely thereafter, namely only after 2^20 requests have been served each. AFAICT, this reseeding based on the number of requests served is primarily motivated by information theoretic considerations, c.f. NIST SP800-90Ar1, sec. 8.6.8 ("Reseeding").

However, given the relatively large seed lifetime of 2^20 requests, the 'nopr' DRBGs can hardly be considered to provide any prediction resistance whatsoever, i.e. to protect against threats like side channel leaks of the internal DRBG state (think e.g. leaked VM snapshots). This is expected and completely in line with the 'nopr' naming, but as e.g. the "drbg_nopr_hmac_sha512" implementation is potentially being used for providing the "stdrng" and thus, the crypto_default_rng serving the in-kernel crypto, it would certainly be desirable to achieve at least the same level of prediction resistance as get_random_bytes() does.

Note that the chacha20 rngs underlying get_random_bytes() get reseeded every CRNG_RESEED_INTERVAL == 5min: the secondary, per-NUMA node rngs from the primary one and the primary rng in turn from the entropy pool, provided sufficient entropy is available.

The 'nopr' DRBGs do draw randomness from get_random_bytes() for their initial seed already, so making them reseed themselves periodically from get_random_bytes() in order to let them benefit from the latter's prediction resistance is not such a big change conceptually.

In principle, it would also have been possible to make the 'nopr' DRBGs periodically invoke a full reseeding operation, i.e. to also consider the jitterentropy source (if enabled) in addition to get_random_bytes() for the seed value. However, get_random_bytes() is relatively lightweight as compared to the jitterentropy generation process and thus, even though the 'nopr' reseeding is supposed to get invoked infrequently, it's IMO still worthwhile to avoid occasional latency spikes for drbg_generate() and stick to get_random_bytes() only.

As an additional remark, note that drawing randomness from the non-SP800-90B-conforming get_random_bytes() only won't adversely affect SP800-90A conformance either: the very same is being done during boot via drbg_seed_from_random() already once rng_is_initialized() flips to true and it follows that if the DRBG implementation does conform to SP800-90A now, it will continue to do so.

Make the 'nopr' DRBGs reseed themselves periodically from get_random_bytes() every CRNG_RESEED_INTERVAL == 5min.

More specifically, introduce a new member ->last_seed_time to struct drbg_state for recording in units of jiffies when the last seeding operation had taken place. Make __drbg_seed() maintain it and let drbg_generate() invoke a reseed from get_random_bytes() via drbg_seed_from_random() if more than 5min have passed by since the last seeding operation. Be careful not to reseed if in testing mode though, or otherwise the drbg related tests in crypto/testmgr.c would fail to reproduce the expected output.

In order to keep the formatting clean in drbg_generate() wrap the logic for deciding whether or not a reseed is due in a new helper, drbg_nopr_reseed_interval_elapsed().

Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Stephan Müller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
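A sketch of the helper named above (->last_seed_time is the member this commit introduces; the 5-minute constant mirrors CRNG_RESEED_INTERVAL, and the rest is illustrative rather than the exact in-tree code):

  #include <crypto/drbg.h>
  #include <linux/jiffies.h>

  static bool drbg_nopr_reseed_interval_elapsed(struct drbg_state *drbg)
  {
          /* Reseed from get_random_bytes() once 5 minutes have passed. */
          unsigned long next_reseed = drbg->last_seed_time + 5 * 60 * HZ;

          return time_after(jiffies, next_reseed);
  }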
2021-11-26  crypto: drbg - make reseeding from get_random_bytes() synchronous  (Nicolai Stange)

get_random_bytes() usually hasn't full entropy available by the time DRBG instances are first getting seeded from it during boot. Thus, the DRBG implementation registers random_ready_callbacks which would in turn schedule some work for reseeding the DRBGs once get_random_bytes() has sufficient entropy available.

For reference, the relevant history around handling DRBG (re)seeding in the context of a not yet fully seeded get_random_bytes() is:

  commit 16b369a91d0d ("random: Blocking API for accessing nonblocking_pool")
  commit 4c7879907edd ("crypto: drbg - add async seeding operation")
  commit 205a525c3342 ("random: Add callback API for random pool readiness")
  commit 57225e679788 ("crypto: drbg - Use callback API for random readiness")
  commit c2719503f5e1 ("random: Remove kernel blocking API")

However, some time later, the initialization state of get_random_bytes() has been made queryable via rng_is_initialized() introduced with commit 9a47249d444d ("random: Make crng state queryable").

This primitive now allows for streamlining the DRBG reseeding from get_random_bytes() by replacing that aforementioned asynchronous work scheduling from random_ready_callbacks with some simpler, synchronous code in drbg_generate() next to the related logic already present therein. Apart from improving overall code readability, this change will also enable DRBG users to rely on wait_for_random_bytes() for ensuring that the initial seeding has completed, if desired.

The previous patches already laid the grounds by making drbg_seed() record at each DRBG instance whether it was being seeded at a time when rng_is_initialized() still had been false as indicated by ->seeded == DRBG_SEED_STATE_PARTIAL.

All that remains to be done now is to make drbg_generate() check for this condition, determine whether rng_is_initialized() has flipped to true in the meanwhile and invoke a reseed from get_random_bytes() if so.

Make this move:
 - rename the former drbg_async_seed() work handler, i.e. the one in charge of reseeding a DRBG instance from get_random_bytes(), to "drbg_seed_from_random()",
 - change its signature as appropriate, i.e. make it take a struct drbg_state rather than a work_struct and change its return type from "void" to "int" in order to allow for passing error information from e.g. its __drbg_seed() invocation onwards to callers,
 - make drbg_generate() invoke this drbg_seed_from_random() once it encounters a DRBG instance with ->seeded == DRBG_SEED_STATE_PARTIAL by the time rng_is_initialized() has flipped to true and
 - prune everything related to the former, random_ready_callback based mechanism.

As drbg_seed_from_random() is now getting invoked from drbg_generate() with the ->drbg_mutex being held, it must not attempt to recursively grab it once again. Remove the corresponding mutex operations from what is now drbg_seed_from_random().

Furthermore, as drbg_seed_from_random() can now report errors directly to its caller, there's no need for it to temporarily switch the DRBG's ->seeded state to DRBG_SEED_STATE_UNSEEDED so that a failure of the subsequently invoked __drbg_seed() will get signaled to drbg_generate(). Don't do it then.

Signed-off-by: Nicolai Stange <nstange@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-11-26  crypto: drbg - track whether DRBG was seeded with !rng_is_initialized()  (Nicolai Stange)
Currently, the DRBG implementation schedules asynchronous works from random_ready_callbacks for reseeding the DRBG instances with output from get_random_bytes() once the latter has sufficient entropy available. However, as the get_random_bytes() initialization state can get queried by means of rng_is_initialized() now, there is no real need for this asynchronous reseeding logic anymore and it's better to keep things simple by doing it synchronously when needed instead, i.e. from drbg_generate() once rng_is_initialized() has flipped to true. Of course, for this to work, drbg_generate() would need some means by which it can tell whether or not rng_is_initialized() has flipped to true since the last seeding from get_random_bytes(). Or equivalently, whether or not the last seed from get_random_bytes() has happened when rng_is_initialized() was still evaluating to false. As it currently stands, enum drbg_seed_state allows for the representation of two different DRBG seeding states: DRBG_SEED_STATE_UNSEEDED and DRBG_SEED_STATE_FULL. The former makes drbg_generate() to invoke a full reseeding operation involving both, the rather expensive jitterentropy as well as the get_random_bytes() randomness sources. The DRBG_SEED_STATE_FULL state on the other hand implies that no reseeding at all is required for a !->pr DRBG variant. Introduce the new DRBG_SEED_STATE_PARTIAL state to enum drbg_seed_state for representing the condition that a DRBG was being seeded when rng_is_initialized() had still been false. In particular, this new state implies that - the given DRBG instance has been fully seeded from the jitterentropy source (if enabled) - and drbg_generate() is supposed to reseed from get_random_bytes() *only* once rng_is_initialized() turns to true. Up to now, the __drbg_seed() helper used to set the given DRBG instance's ->seeded state to constant DRBG_SEED_STATE_FULL. Introduce a new argument allowing for the specification of the to be written ->seeded value instead. Make the first of its two callers, drbg_seed(), determine the appropriate value based on rng_is_initialized(). The remaining caller, drbg_async_seed(), is known to get invoked only once rng_is_initialized() is true, hence let it pass constant DRBG_SEED_STATE_FULL for the new argument to __drbg_seed(). There is no change in behaviour, except for that the pr_devel() in drbg_generate() would now report "unseeded" for ->pr DRBG instances which had last been seeded when rng_is_initialized() was still evaluating to false. Signed-off-by: Nicolai Stange <nstange@suse.de> Reviewed-by: Stephan Müller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
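Taken together with the preparatory patch below, the seeding states end up as a simple tristate (names as given in these commit messages; see include/crypto/drbg.h for the authoritative definition):

  enum drbg_seed_state {
          DRBG_SEED_STATE_UNSEEDED,
          DRBG_SEED_STATE_PARTIAL,  /* seeded while !rng_is_initialized() */
          DRBG_SEED_STATE_FULL,
  };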
2021-11-26  crypto: drbg - prepare for more fine-grained tracking of seeding state  (Nicolai Stange)
There are two different randomness sources the DRBGs are getting seeded from, namely the jitterentropy source (if enabled) and get_random_bytes(). At initial DRBG seeding time during boot, the latter might not have collected sufficient entropy for seeding itself yet and thus, the DRBG implementation schedules a reseed work from a random_ready_callback once that has happened. This is particularly important for the !->pr DRBG instances, for which (almost) no further reseeds are getting triggered during their lifetime. Because collecting data from the jitterentropy source is a rather expensive operation, the aforementioned asynchronously scheduled reseed work restricts itself to get_random_bytes() only. That is, it in some sense amends the initial DRBG seed derived from jitterentropy output at full (estimated) entropy with fresh randomness obtained from get_random_bytes() once that has been seeded with sufficient entropy itself. With the advent of rng_is_initialized(), there is no real need for doing the reseed operation from an asynchronously scheduled work anymore and a subsequent patch will make it synchronous by moving it next to related logic already present in drbg_generate(). However, for tracking whether a full reseed including the jitterentropy source is required or a "partial" reseed involving only get_random_bytes() would be sufficient already, the boolean struct drbg_state's ->seeded member must become a tristate value. Prepare for this by introducing the new enum drbg_seed_state and change struct drbg_state's ->seeded member's type from bool to that type. For facilitating review, enum drbg_seed_state is made to only contain two members corresponding to the former ->seeded values of false and true resp. at this point: DRBG_SEED_STATE_UNSEEDED and DRBG_SEED_STATE_FULL. A third one for tracking the intermediate state of "seeded from jitterentropy only" will be introduced with a subsequent patch. There is no change in behaviour at this point. Signed-off-by: Nicolai Stange <nstange@suse.de> Reviewed-by: Stephan Müller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-10-29  crypto: ecc - Export additional helper functions  (Daniele Alessandrelli)

Export the following additional ECC helper functions:
 - ecc_alloc_point()
 - ecc_free_point()
 - vli_num_bits()
 - ecc_point_is_zero()

This is done to allow future ECC device drivers to re-use existing code, thus simplifying their implementation.

Functions are exported using EXPORT_SYMBOL() (instead of EXPORT_SYMBOL_GPL()) to be consistent with the functions already exported by crypto/ecc.c.

Exported functions are documented in include/crypto/internal/ecc.h.

Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-10-29  crypto: ecc - Move ecc.h to include/crypto/internal  (Daniele Alessandrelli)
Move ecc.h header file to 'include/crypto/internal' so that it can be easily imported from everywhere in the kernel tree. This change is done to allow crypto device drivers to re-use the symbols exported by 'crypto/ecc.c', thus avoiding code duplication. Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-10-29  crypto: engine - Add KPP Support to Crypto Engine  (Prabhjot Khurana)
Add KPP support to the crypto engine queue manager, so that it can be used to simplify the logic of KPP device drivers as done for other crypto drivers. Signed-off-by: Prabhjot Khurana <prabhjot.khurana@intel.com> Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-08-30  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)

Pull crypto updates from Herbert Xu:
 "Algorithms:
   - Add AES-NI/AVX/x86_64 implementation of SM4.

  Drivers:
   - Add Arm SMCCC TRNG based driver"

[ And obviously a lot of random fixes and updates - Linus]

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (84 commits)
  crypto: sha512 - remove imaginary and mystifying clearing of variables
  crypto: aesni - xts_crypt() return if walk.nbytes is 0
  padata: Remove repeated verbose license text
  crypto: ccp - Add support for new CCP/PSP device ID
  crypto: x86/sm4 - add AES-NI/AVX2/x86_64 implementation
  crypto: x86/sm4 - export reusable AESNI/AVX functions
  crypto: rmd320 - remove rmd320 in Makefile
  crypto: skcipher - in_irq() cleanup
  crypto: hisilicon - check _PS0 and _PR0 method
  crypto: hisilicon - change parameter passing of debugfs function
  crypto: hisilicon - support runtime PM for accelerator device
  crypto: hisilicon - add runtime PM ops
  crypto: hisilicon - using 'debugfs_create_file' instead of 'debugfs_create_regset32'
  crypto: tcrypt - add GCM/CCM mode test for SM4 algorithm
  crypto: testmgr - Add GCM/CCM mode test of SM4 algorithm
  crypto: tcrypt - Fix missing return value check
  crypto: hisilicon/sec - modify the hardware endian configuration
  crypto: hisilicon/sec - fix the abnormal exiting process
  crypto: qat - store vf.compatible flag
  crypto: qat - do not export adf_iov_putmsg()
  ...
2021-08-23  crypto: public_key: fix overflow during implicit conversion  (zhenwei pi)

Hit a kernel warning like this. It can be reproduced by verifying a 256-byte data file with the keyctl command, using the following script:

  RAWDATA=rawdata
  SIGDATA=sigdata

  modprobe pkcs8_key_parser

  rm -rf *.der *.pem *.pfx
  rm -rf $RAWDATA
  dd if=/dev/random of=$RAWDATA bs=256 count=1

  openssl req -nodes -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem \
      -subj "/C=CN/ST=GD/L=SZ/O=vihoo/OU=dev/CN=xx.com/emailAddress=yy@xx.com"

  KEY_ID=`openssl pkcs8 -in key.pem -topk8 -nocrypt -outform DER | keyctl \
      padd asymmetric 123 @s`

  keyctl pkey_sign $KEY_ID 0 $RAWDATA enc=pkcs1 hash=sha1 > $SIGDATA
  keyctl pkey_verify $KEY_ID 0 $RAWDATA $SIGDATA enc=pkcs1 hash=sha1

Then the kernel reports:

  WARNING: CPU: 5 PID: 344556 at crypto/rsa-pkcs1pad.c:540 pkcs1pad_verify+0x160/0x190
  ...
  Call Trace:
   public_key_verify_signature+0x282/0x380
   ? software_key_query+0x12d/0x180
   ? keyctl_pkey_params_get+0xd6/0x130
   asymmetric_key_verify_signature+0x66/0x80
   keyctl_pkey_verify+0xa5/0x100
   do_syscall_64+0x35/0xb0
   entry_SYSCALL_64_after_hwframe+0x44/0xae

The reason for this issue: in function 'asymmetric_key_verify_signature', '.digest_size(u8) = params->in_len(u32)' leads to overflow of a u8 value, so use u32 instead of u8 for the digest_size field. Also reorder struct public_key_signature, which saves 8 bytes on a 64-bit machine.

Cc: stable@vger.kernel.org
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
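The truncation itself is easy to demonstrate in isolation (a standalone user-space illustration, not kernel code):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint32_t in_len = 256;         /* e.g. a 256-byte input length */
          uint8_t digest_size = in_len;  /* old u8 field: implicitly truncates to 0 */

          printf("in_len=%u digest_size=%u\n", in_len, digest_size);
          return 0;
  }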
2021-07-30  crypto: arm64/sm4-ce - Make dependent on sm4 library instead of sm4-generic  (Tianjia Zhang)
SM4 library is abstracted from sm4-generic algorithm, sm4-ce can depend on the SM4 library instead of sm4-generic, and some functions in sm4-generic do not need to be exported. Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-07-30  crypto: sm4 - create SM4 library based on sm4 generic code  (Tianjia Zhang)

Take the existing small footprint and mostly time invariant C code and turn it into a SM4 library that can be used for non-performance critical, casual use of SM4, and as a fallback for, e.g., SIMD code that needs a secondary path that can be taken in contexts where the SIMD unit is off limits.

Secondly, some of the code has been optimized, such as unrolling small fixed-count loops, removing unnecessary memory shifts, and exporting the sbox, fk and ck arrays as well as the basic encryption and decryption functions.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-06-28  crypto: scatterwalk - Remove obsolete PageSlab check  (Herbert Xu)
As it is now legal to call flush_dcache_page on slab pages we no longer need to do the check in the Crypto API. Reported-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-06-24  crypto: api - Move crypto attr definitions out of crypto.h  (Herbert Xu)
The definitions for crypto_attr-related types and enums are not needed by most Crypto API users. This patch moves them out of crypto.h and into algapi.h/internal.h depending on the extent of their use. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-06-17  crypto: api - remove CRYPTOA_U32 and related functions  (Liu Shixin)
According to the advice of Eric and Herbert, type CRYPTOA_U32 has been unused for over a decade, so remove the code related to CRYPTOA_U32. After removing CRYPTOA_U32, the type of the variable attrs can be changed from union to struct. Signed-off-by: Liu Shixin <liushixin2@huawei.com> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-06-17  crypto: shash - avoid comparing pointers to exported functions under CFI  (Ard Biesheuvel)
crypto_shash_alg_has_setkey() is implemented by testing whether the .setkey() member of a struct shash_alg points to the default version, called shash_no_setkey(). As crypto_shash_alg_has_setkey() is a static inline, this requires shash_no_setkey() to be exported to modules. Unfortunately, when building with CFI, function pointers are routed via CFI stubs which are private to each module (or to the kernel proper) and so this function pointer comparison may fail spuriously. Let's fix this by turning crypto_shash_alg_has_setkey() into an out of line function. Cc: Sami Tolvanen <samitolvanen@google.com> Cc: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
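The shape of the fix, sketched (assuming the default-.setkey comparison simply moves out of line into crypto/shash.c, where shash_no_setkey() can remain private):

  /* crypto/shash.c (sketch) */
  bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
  {
          /* Compared in the TU that defines shash_no_setkey(), so no
           * per-module CFI stub address is ever involved. */
          return alg->setkey != shash_no_setkey;
  }
  EXPORT_SYMBOL_GPL(crypto_shash_alg_has_setkey);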
2021-05-28  crypto: header - Fix spelling errors  (Zhen Lei)

Fix some spelling mistakes in comments:
  cipherntext ==> ciphertext
  syncronise ==> synchronise
  feeded ==> fed

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-04-02  crypto: poly1305 - fix poly1305_core_setkey() declaration  (Arnd Bergmann)

gcc-11 points out a mismatch between the declaration and the definition of poly1305_core_setkey():

  lib/crypto/poly1305-donna32.c:13:67: error: argument 2 of type ‘const u8[16]’ {aka ‘const unsigned char[16]’} with mismatched bound [-Werror=array-parameter=]
     13 | void poly1305_core_setkey(struct poly1305_core_key *key, const u8 raw_key[16])
        |                                                          ~~~~~~~~~^~~~~~~~~~~
  In file included from lib/crypto/poly1305-donna32.c:11:
  include/crypto/internal/poly1305.h:21:68: note: previously declared as ‘const u8 *’ {aka ‘const unsigned char *’}
     21 | void poly1305_core_setkey(struct poly1305_core_key *key, const u8 *raw_key);

This is harmless in principle, as the calling conventions are the same, but the more specific prototype allows better type checking in the caller.

Change the declaration to match the actual function definition. The poly1305_simd_init() is a bit suspicious here, as it previously had a 32-byte argument type, but looks like it needs to take the 16-byte POLY1305_BLOCK_SIZE array instead.

Fixes: 1c08a104360f ("crypto: poly1305 - add new 32 and 64-bit generic versions")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-04-02  random: initialize ChaCha20 constants with correct endianness  (Eric Biggers)
On big endian CPUs, the ChaCha20-based CRNG is using the wrong endianness for the ChaCha20 constants. This doesn't matter cryptographically, but technically it means it's not ChaCha20 anymore. Fix it to always use the standard constants. Cc: linux-crypto@vger.kernel.org Cc: Andy Lutomirski <luto@kernel.org> Cc: Jann Horn <jannh@google.com> Cc: Theodore Ts'o <tytso@mit.edu> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-03-26  Merge branch 'ecc'  (Herbert Xu)
This pulls in the NIST P384/256/192 x509 changes.
2021-03-26  crypto: ecc - Add NIST P384 curve parameters  (Saulo Alessandre)

Add the parameters for the NIST P384 curve and define a new curve ID for it. Make the curve available in ecc_get_curve.

Summary of changes:
 * crypto/ecc_curve_defs.h
   - add nist_p384 params
 * include/crypto/ecdh.h
   - add ECC_CURVE_NIST_P384
 * crypto/ecc.c
   - change ecc_get_curve to accept nist_p384

Signed-off-by: Saulo Alessandre <saulo.alessandre@tse.jus.br>
Tested-by: Stefan Berger <stefanb@linux.ibm.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-03-13  crypto: ecc - add curve25519 params and expose them  (Meng Yu)

1. Add curve 25519 parameters in 'crypto/ecc_curve_defs.h';
2. Add the curve25519 interface 'ecc_get_curve25519_param' in 'include/crypto/ecc_curve.h', so that its parameters are exposed to everyone in the kernel tree.

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-03-13  crypto: ecc - expose ecc curves  (Meng Yu)

Move 'ecc_get_curve' to 'include/crypto/ecc_curve.h', so that everyone in the kernel tree can easily get the ECC curve parameters.

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-03-13  crypto: ecdh - move curve_id of ECDH from the key to algorithm name  (Meng Yu)

1. crypto and crypto/atmel-ecc:
   Move the curve id of ECDH from the key into the algorithm name in crypto and atmel-ecc, so the ECDH algorithm name changes from 'ecdh' to 'ecdh-nist-pxxx' and 'curve_id' is no longer used in 'struct ecdh';
2. crypto/testmgr and net/bluetooth:
   Modify 'testmgr.c', 'testmgr.h' and 'net/bluetooth' to adapt to this modification.

Signed-off-by: Meng Yu <yumeng18@huawei.com>
Reviewed-by: Zaibo Xu <xuzaibo@huawei.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
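For callers, the visible effect is roughly the following (hedged sketch; crypto_alloc_kpp() is the existing KPP allocation API, and the curve is now chosen by algorithm name rather than by a curve_id member in struct ecdh):

  #include <crypto/kpp.h>
  #include <crypto/ecdh.h>

  static struct crypto_kpp *alloc_p256_ecdh(void)
  {
          /* Previously: allocate "ecdh" and select the curve via a
           * curve_id field in struct ecdh; now the name carries it. */
          return crypto_alloc_kpp("ecdh-nist-p256", 0, 0);
  }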
2021-03-13  crypto: api - check for ERR pointers in crypto_destroy_tfm()  (Ard Biesheuvel)
Given that crypto_alloc_tfm() may return ERR pointers, and to avoid crashes on obscure error paths where such pointers are presented to crypto_destroy_tfm() (such as [0]), add an ERR_PTR check there before dereferencing the second argument as a struct crypto_tfm pointer. [0] https://lore.kernel.org/linux-crypto/000000000000de949705bc59e0f6@google.com/ Reported-by: syzbot+12cf5fbfdeba210a89dd@syzkaller.appspotmail.com Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-02-23  Merge tag 'keys-misc-20210126' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs  (Linus Torvalds)

Pull keyring updates from David Howells:
 "Here's a set of minor keyrings fixes/cleanups that I've collected from various people for the upcoming merge window.

  A couple of them might, in theory, be visible to userspace:

   - Make blacklist_vet_description() reject uppercase letters as they don't match the all-lowercase hex string generated for a blacklist search. This may want reconsideration in the future, but, currently, you can't add to the blacklist keyring from userspace and the only source of blacklist keys generates lowercase descriptions.

   - Fix blacklist_init() to use a new KEY_ALLOC_* flag to indicate that it wants KEY_FLAG_KEEP to be set rather than passing KEY_FLAG_KEEP into keyring_alloc() as KEY_FLAG_KEEP isn't a valid alloc flag. This isn't currently a problem as the blacklist keyring isn't currently writable by userspace.

  The rest of the patches are cleanups and I don't think they should have any visible effect"

* tag 'keys-misc-20210126' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs:
  watch_queue: rectify kernel-doc for init_watch()
  certs: Replace K{U,G}IDT_INIT() with GLOBAL_ROOT_{U,G}ID
  certs: Fix blacklist flag type confusion
  PKCS#7: Fix missing include
  certs: Fix blacklisted hexadecimal hash string check
  certs/blacklist: fix kernel doc interface issue
  crypto: public_key: Remove redundant header file from public_key.h
  keys: remove trailing semicolon in macro definition
  crypto: pkcs7: Use match_string() helper to simplify the code
  PKCS#7: drop function from kernel-doc pkcs7_validate_trust_one
  encrypted-keys: Replace HTTP links with HTTPS ones
  crypto: asymmetric_keys: fix some comments in pkcs7_parser.h
  KEYS: remove redundant memset
  security: keys: delete repeated words in comments
  KEYS: asymmetric: Fix kerneldoc
  security/keys: use kvfree_sensitive()
  watch_queue: Drop references to /dev/watch_queue
  keys: Remove outdated __user annotations
  security: keys: Fix fall-through warnings for Clang
2021-01-22  crypto - shash: reduce minimum alignment of shash_desc structure  (Ard Biesheuvel)
Unlike many other structure types defined in the crypto API, the 'shash_desc' structure is permitted to live on the stack, which implies its contents may not be accessed by DMA masters. (This is due to the fact that the stack may be located in the vmalloc area, which requires a different virtual-to-physical translation than the one implemented by the DMA subsystem) Our definition of CRYPTO_MINALIGN_ATTR is based on ARCH_KMALLOC_MINALIGN, which may take DMA constraints into account on architectures that support non-cache coherent DMA such as ARM and arm64. In this case, the value is chosen to reflect the largest cacheline size in the system, in order to ensure that explicit cache maintenance as required by non-coherent DMA masters does not affect adjacent, unrelated slab allocations. On arm64, this value is currently set at 128 bytes. This means that applying CRYPTO_MINALIGN_ATTR to struct shash_desc is both unnecessary (as it is never used for DMA), and undesirable, given that it wastes stack space (on arm64, performing the alignment costs 112 bytes in the worst case, and the hole between the 'tfm' and '__ctx' members takes up another 120 bytes, resulting in an increased stack footprint of up to 232 bytes.) So instead, let's switch to the minimum SLAB alignment, which does not take DMA constraints into account. Note that this is a no-op for x86. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
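For context, the structure in question is typically placed on the stack via SHASH_DESC_ON_STACK(); a hedged usage sketch of the allocation this patch shrinks:

  #include <crypto/hash.h>

  static int digest_buffer(struct crypto_shash *tfm,
                           const u8 *data, unsigned int len, u8 *out)
  {
          SHASH_DESC_ON_STACK(desc, tfm);  /* on-stack descriptor, never DMA'd */

          desc->tfm = tfm;
          return crypto_shash_digest(desc, data, len, out);
  }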
2021-01-21  crypto: public_key: Remove redundant header file from public_key.h  (Tianjia Zhang)

The akcipher.h header file was originally introduced in SM2, and then the definition of SM2 was moved to the existing code. This header file was left behind and should be removed.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Ben Boeckel <mathstuf@gmail.com>
2021-01-14  crypto: x86 - remove glue helper module  (Ard Biesheuvel)
All dependencies on the x86 glue helper module have been replaced by local instantiations of the new ECB/CBC preprocessor helper macros, so the glue helper module can be retired. Acked-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03  crypto: blake2b - sync with blake2s implementation  (Eric Biggers)

Sync the BLAKE2b code with the BLAKE2s code as much as possible:

 - Move a lot of code into new headers <crypto/blake2b.h> and <crypto/internal/blake2b.h>, and adjust it to be like the corresponding BLAKE2s code, i.e. like <crypto/blake2s.h> and <crypto/internal/blake2s.h>.
 - Rename constants, e.g. BLAKE2B_*_DIGEST_SIZE => BLAKE2B_*_HASH_SIZE.
 - Use a macro BLAKE2B_ALG() to define the shash_alg structs.
 - Export blake2b_compress_generic() for use as a fallback.

This makes it much easier to add optimized implementations of BLAKE2b, as optimized implementations can use the helper functions crypto_blake2b_{setkey,init,update,final}() and blake2b_compress_generic(). The ARM implementation will use these.

But this change is also helpful because it eliminates unnecessary differences between the BLAKE2b and BLAKE2s code, so that the same improvements can easily be made to both. (The two algorithms are basically identical, except for the word size and constants.) It also makes it straightforward to add a library API for BLAKE2b in the future if/when it's needed.

This change does make the BLAKE2b code slightly more complicated than it needs to be, as it doesn't actually provide a library API yet. For example, __blake2b_update() doesn't really need to exist yet; it could just be inlined into crypto_blake2b_update(). But I believe this is outweighed by the benefits of keeping the code in sync.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03  crypto: blake2s - include <linux/bug.h> instead of <asm/bug.h>  (Eric Biggers)

Address the following checkpatch warning:

  WARNING: Use #include <linux/bug.h> instead of <asm/bug.h>

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03  crypto: blake2s - adjust include guard naming  (Eric Biggers)
Use the full path in the include guards for the BLAKE2s headers to avoid ambiguity and to match the convention for most files in include/crypto/. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03  crypto: blake2s - add comment for blake2s_state fields  (Eric Biggers)
The first three fields of 'struct blake2s_state' are used in assembly code, which isn't immediately obvious, so add a comment to this effect. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03  crypto: blake2s - optimize blake2s initialization  (Eric Biggers)
If no key was provided, then don't waste time initializing the block buffer, as its initial contents won't be used. Also, make crypto_blake2s_init() and blake2s() call a single internal function __blake2s_init() which treats the key as optional, rather than conditionally calling blake2s_init() or blake2s_init_key(). This reduces the compiled code size, as previously both blake2s_init() and blake2s_init_key() were being inlined into these two callers, except when the key size passed to blake2s() was a compile-time constant. These optimizations aren't that significant for BLAKE2s. However, the equivalent optimizations will be more significant for BLAKE2b, as everything is twice as big in BLAKE2b. And it's good to keep things consistent rather than making optimizations for BLAKE2b but not BLAKE2s. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03  crypto: blake2s - share the "shash" API boilerplate code  (Eric Biggers)
Add helper functions for shash implementations of BLAKE2s to include/crypto/internal/blake2s.h, taking advantage of __blake2s_update() and __blake2s_final() that were added by the previous patch to share more code between the library and shash implementations. crypto_blake2s_setkey() and crypto_blake2s_init() are usable as shash_alg::setkey and shash_alg::init directly, while crypto_blake2s_update() and crypto_blake2s_final() take an extra 'blake2s_compress_t' function pointer parameter. This allows the implementation of the compression function to be overridden, which is the only part that optimized implementations really care about. The new functions are inline functions (similar to those in sha1_base.h, sha256_base.h, and sm3_base.h) because this avoids needing to add a new module blake2s_helpers.ko, they aren't *too* long, and this avoids indirect calls which are expensive these days. Note that they can't go in blake2s_generic.ko, as that would require selecting CRYPTO_BLAKE2S from CRYPTO_BLAKE2S_X86, which would cause a recursive dependency. Finally, use these new helper functions in the x86 implementation of BLAKE2s. (This part should be a separate patch, but unfortunately the x86 implementation used the exact same function names like "crypto_blake2s_update()", so it had to be updated at the same time.) Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03  crypto: blake2s - move update and final logic to internal/blake2s.h  (Eric Biggers)
Move most of blake2s_update() and blake2s_final() into new inline functions __blake2s_update() and __blake2s_final() in include/crypto/internal/blake2s.h so that this logic can be shared by the shash helper functions. This will avoid duplicating this logic between the library and shash implementations. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03  crypto: remove cipher routines from public crypto API  (Ard Biesheuvel)
The cipher routines in the crypto API are mostly intended for templates implementing skcipher modes generically in software, and shouldn't be used outside of the crypto subsystem. So move the prototypes and all related definitions to a new header file under include/crypto/internal. Also, let's use the new module namespace feature to move the symbol exports into a new namespace CRYPTO_INTERNAL. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
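A module that legitimately still needs the bare cipher API after this change would, under this scheme, switch to the internal header and import the new symbol namespace (sketch):

  #include <crypto/internal/cipher.h>
  #include <linux/module.h>

  /* Without this, loading the module fails with an "unknown symbol in
   * module" style error for the namespaced cipher exports. */
  MODULE_IMPORT_NS(CRYPTO_INTERNAL);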
2020-12-04  crypto: lib/blake2s - Move selftest prototype into header file  (Herbert Xu)
This patch fixes a missing prototype warning on blake2s_selftest. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-11-20  crypto: lib/curve25519 - Move selftest prototype into header file  (Herbert Xu)
This patch moves the curve25519_selftest into curve25519.h so we don't get a warning from gcc complaining about a missing prototype. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-11-20  crypto: sha - split sha.h into sha1.h and sha2.h  (Eric Biggers)
Currently <crypto/sha.h> contains declarations for both SHA-1 and SHA-2, and <crypto/sha3.h> contains declarations for SHA-3. This organization is inconsistent, but more importantly SHA-1 is no longer considered to be cryptographically secure. So to the extent possible, SHA-1 shouldn't be grouped together with any of the other SHA versions, and usage of it should be phased out. Therefore, split <crypto/sha.h> into two headers <crypto/sha1.h> and <crypto/sha2.h>, and make everyone explicitly specify whether they want the declarations for SHA-1, SHA-2, or both. This avoids making the SHA-1 declarations visible to files that don't want anything to do with SHA-1. It also prepares for potentially moving sha1.h into a new insecure/ or dangerous/ directory. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
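After the split, callers include only what they actually use, for example:

  #include <crypto/sha2.h>   /* SHA-224/256/384/512 declarations */
  /* #include <crypto/sha1.h>   only where legacy SHA-1 use remains */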
2020-11-06  crypto: aead - add crypto_aead_driver_name()  (Eric Biggers)
Add crypto_aead_driver_name(), which is analogous to crypto_skcipher_driver_name(). Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>