author     Andrea Di Biagio <Andrea_DiBiagio@sn.scee.net>  Thu, 12 Dec 2013 11:50:47 +0000 (11:50 +0000)
committer  Andrea Di Biagio <Andrea_DiBiagio@sn.scee.net>  Thu, 12 Dec 2013 11:50:47 +0000 (11:50 +0000)
commit     a29b054e7a88fefb17b099a2e7727897f8b3743a
tree       2f58a9f41140e0c94c3883f8cc03e3fc82de2ce2
parent     b4605d4d4b52b8dd678779a6cc47551c34c8b74d
Added new X86 patterns to select SSE scalar fp arithmetic instructions from
a vector packed single/double fp operation followed by a vector insert.
The effect is that the backend converts the packed fp instruction
followed by a vector insert into an SSE or AVX scalar fp instruction.
For example, given the following code:
__m128 foo(__m128 A, __m128 B) {
__m128 C = A + B;
  return (__m128) {C[0], A[1], A[2], A[3]};
}
previously we generated:
addps %xmm0, %xmm1
movss %xmm1, %xmm0
we now generate:
addss %xmm1, %xmm0
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197145 91177308-0d34-0410-b5e6-96231b3b80d8
lib/Target/X86/X86InstrSSE.td
test/CodeGen/X86/sse-scalar-fp-arith-2.ll  [new file with mode: 0644]