author    | Herbert Xu <herbert@gondor.apana.org.au> | 2008-01-11 08:09:35 +1100
committer | Herbert Xu <herbert@gondor.apana.org.au> | 2008-01-11 08:09:35 +1100
commit    | 490fe3f05be3f7c87d7932bcb6e6e53e3db2cd9c (patch)
tree      | 9b919a9b05daf85d1bd410ce0a4ada482912cbd9 /crypto
parent    | d4a7dd8e637b322faaa934ffcd6dd07711af831f (diff)
[CRYPTO] padlock: Fix alignment fault in aes_crypt_copy
The previous patch prevented spurious read faults by copying the data
when a single block happens to sit at the end of a page. It turns out
that gcc cannot guarantee 16-byte stack alignment in the kernel with
__attribute__. The following report from Torben Viets shows a buffer
that is only 8-byte aligned:
> eneral protection fault: 0000 [#1]
> Modules linked in: xt_TCPMSS xt_tcpmss iptable_mangle ipt_MASQUERADE
> xt_tcpudp xt_mark xt_state iptable_nat nf_nat nf_conntrack_ipv4
> iptable_filter ip_tables x_tables pppoe pppox af_packet ppp_generic slhc
> aes_i586
> CPU: 0
> EIP: 0060:[<c035b828>] Not tainted VLI
> EFLAGS: 00010292 (2.6.23.12 #7)
> EIP is at aes_crypt_copy+0x28/0x40
> eax: f7639ff0 ebx: f6c24050 ecx: 00000001 edx: f6c24030
> esi: f7e89dc8 edi: f7639ff0 ebp: 00010000 esp: f7e89dc8
Since the hardware requires 16-byte alignment, the following patch fixes
this by open-coding the alignment adjustment.
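For reference, a minimal sketch of what the open-coded adjustment looks like
(the function signature, PADLOCK_ALIGNMENT and padlock_xcrypt() follow the
padlock driver's conventions and are assumptions here, not a verbatim copy of
the patch): instead of trusting __attribute__((aligned(16))) on a stack
variable, over-allocate the buffer and round the working pointer up to the
next 16-byte boundary by hand.

```c
/*
 * Sketch only: assumes AES_BLOCK_SIZE == 16 and PADLOCK_ALIGNMENT == 16
 * as in drivers/crypto/padlock-aes.c; the actual patch may differ in detail.
 */
static void aes_crypt_copy(const u8 *in, u8 *out, u32 *key,
			   struct cword *cword)
{
	/* Over-allocate so a 16-byte aligned block always fits inside. */
	u8 buf[AES_BLOCK_SIZE * 2 + PADLOCK_ALIGNMENT - 1];

	/* Open-coded alignment: round the buffer start up to 16 bytes. */
	u8 *tmp = (u8 *)(((unsigned long)buf + PADLOCK_ALIGNMENT - 1) &
			 ~(PADLOCK_ALIGNMENT - 1));

	/* Copy the block away from the page boundary, encrypt in place,
	 * then copy the result back to the caller's buffer. */
	memcpy(tmp, in, AES_BLOCK_SIZE);
	padlock_xcrypt(tmp, tmp, key, cword);
	memcpy(out, tmp, AES_BLOCK_SIZE);
}
```

Rounding the start of an over-sized on-stack buffer up to the next 16-byte
boundary guarantees the alignment the hardware needs regardless of what stack
alignment gcc actually provides.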
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>