iiy.htm
Enumerations
IiyList
Instruction handlers
CLEVICT0 CLEVICT1 DELAY SPFLT TZCNTI VADDNPD VADDNPS VADDSETSPS VANDNPD VANDNPS VANDPD VANDPS VBLENDPD VBLENDPS VBLENDVPD VBLENDVPS VCVTFXPNTDQ2PS VCVTFXPNTPD2DQ VCVTFXPNTPD2UDQ VCVTFXPNTPS2DQ VCVTFXPNTPS2UDQ VCVTFXPNTUDQ2PS VEXP223PS VEXP2PD VEXP2PS VFIXUPNANPD VFIXUPNANPS VGMAXABSPS VGMAXPD VGMAXPS VGMINPD VGMINPS VLDDQU VLDMXCSR VLDQQU VLOADUNPACKHD VLOADUNPACKHPD VLOADUNPACKHPS VLOADUNPACKHQ VLOADUNPACKLD VLOADUNPACKLPD VLOADUNPACKLPS VLOADUNPACKLQ VLOG2PS VMASKMOVDQU VMASKMOVPD VMASKMOVPS VORPD VORPS VPACKSSDW VPACKSSWB VPACKSTOREHD VPACKSTOREHPD VPACKSTOREHPS VPACKSTOREHQ VPACKSTORELD VPACKSTORELPD VPACKSTORELPS VPACKSTORELQ VPACKUSDW VPACKUSWB VPADCD VPADDSETCD VPADDSETSD VPALIGNR VPAND VPANDD VPANDN VPANDND VPANDNQ VPANDQ VPAVGB VPAVGW VPBLENDD VPBLENDVB VPBLENDW VPBROADCASTMB2Q VPBROADCASTMW2D VPCMPB VPCMPD VPCMPEQB VPCMPEQD VPCMPEQQ VPCMPEQUB VPCMPEQUD VPCMPEQUQ VPCMPEQUW VPCMPEQW VPCMPFALSEB VPCMPFALSED VPCMPFALSEQ VPCMPFALSEUB VPCMPFALSEUD VPCMPFALSEUQ VPCMPFALSEUW VPCMPFALSEW VPCMPGTB VPCMPGTD VPCMPGTQ VPCMPGTW VPCMPLEB VPCMPLED VPCMPLEQ VPCMPLEUB VPCMPLEUD VPCMPLEUQ VPCMPLEUW VPCMPLEW VPCMPLTB VPCMPLTD VPCMPLTQ VPCMPLTUB VPCMPLTUD VPCMPLTUQ VPCMPLTUW VPCMPLTW VPCMPNEQB VPCMPNEQD VPCMPNEQQ VPCMPNEQUB VPCMPNEQUD VPCMPNEQUQ VPCMPNEQUW VPCMPNEQW VPCMPNLEB VPCMPNLED VPCMPNLEQ VPCMPNLEUB VPCMPNLEUD VPCMPNLEUQ VPCMPNLEUW VPCMPNLEW VPCMPNLTB VPCMPNLTD VPCMPNLTQ VPCMPNLTUB VPCMPNLTUD VPCMPNLTUQ VPCMPNLTUW VPCMPNLTW VPCMPQ VPCMPTRUEB VPCMPTRUED VPCMPTRUEQ VPCMPTRUEUB VPCMPTRUEUD VPCMPTRUEUQ VPCMPTRUEUW VPCMPTRUEW VPCMPUB VPCMPUD VPCMPUQ VPCMPUW VPCMPW VPCONFLICTD VPCONFLICTQ VPERM2F128 VPERM2I128 VPERMB VPERMD VPERMF32X4 VPERMI2B VPERMI2D VPERMI2PD VPERMI2PS VPERMI2Q VPERMI2W VPERMILPD VPERMILPS VPERMPD VPERMPS VPERMQ VPERMT2B VPERMT2D VPERMT2PD VPERMT2PS VPERMT2Q VPERMT2W VPERMW VPHADDD VPHADDSW VPHADDW VPHSUBD VPHSUBSW VPHSUBW VPLZCNTD VPLZCNTQ VPMADD231D VPMADD233D VPMADD52HUQ VPMADD52LUQ VPMADDUBSW VPMASKMOVD VPMASKMOVQ VPMOVB2M VPMOVD2M VPMOVDB VPMOVDW VPMOVM2B VPMOVM2D VPMOVM2Q VPMOVM2W VPMOVMSKB VPMOVQ2M VPMOVQB VPMOVQD VPMOVQW VPMOVSDB VPMOVSDW VPMOVSQB VPMOVSQD VPMOVSQW VPMOVSWB VPMOVSXBD VPMOVSXBQ VPMOVSXBW VPMOVSXDQ VPMOVSXWD VPMOVSXWQ VPMOVUSDB VPMOVUSDW VPMOVUSQB VPMOVUSQD VPMOVUSQW VPMOVUSWB VPMOVW2M VPMOVWB VPMOVZXBD VPMOVZXBQ VPMOVZXBW VPMOVZXDQ VPMOVZXWD VPMOVZXWQ VPMULDQ VPMULHD VPMULHRSW VPMULHUD VPMULHUW VPMULHW VPMULLD VPMULLQ VPMULLW VPMULTISHIFTQB VPOR VPORD VPORQ VPREFETCH0 VPREFETCH1 VPREFETCH2 VPREFETCHE0 VPREFETCHE1 VPREFETCHE2 VPREFETCHENTA VPREFETCHNTA VPROLD VPROLQ VPROLVD VPROLVQ VPRORD VPRORQ VPRORVD VPRORVQ VPSBBD VPSBBRD VPSUBRD VPSUBRSETBD VPSUBSETBD VPTERNLOGD VPTERNLOGQ VPTESTMB VPTESTMD VPTESTMQ VPTESTMW VPTESTNMB VPTESTNMD VPTESTNMQ VPTESTNMW VPUNPCKHBW VPUNPCKHDQ VPUNPCKHQDQ VPUNPCKHWD VPUNPCKLBW VPUNPCKLDQ VPUNPCKLQDQ VPUNPCKLWD VPXOR VPXORD VPXORQ VRANGEPD VRANGEPS VRANGESD VRANGESS VRCP14PD VRCP14PS VRCP14SD VRCP14SS VRCP23PS VRCP28PD VRCP28PS VRCP28SD VRCP28SS VRCPSS VREDUCEPD VREDUCEPS VREDUCESD VREDUCESS VRNDFXPNTPD VRNDFXPNTPS VRNDSCALEPD VRNDSCALEPS VRNDSCALESD VRNDSCALESS VROUNDPD VROUNDPS VROUNDSD VROUNDSS VRSQRT14PD VRSQRT14PS VRSQRT14SD VRSQRT14SS VRSQRT23PS VRSQRT28PD VRSQRT28PS VRSQRT28SD VRSQRT28SS VRSQRTPS VRSQRTSS VSCALEFPD VSCALEFPS VSCALEFSD VSCALEFSS VSCALEPS VSQRTPD VSQRTPS VSQRTSD VSQRTSS VSTMXCSR VSUBRPD VSUBRPS VUNPCKHPD VUNPCKHPS VUNPCKLPD VUNPCKLPS VXORPD VXORPS

↑ IiyHandlers
assemble VEX-encodable AVX machine instructions.
See also
IiHandlers, [IntelVol2] [IntelAVX512].
iiy PROGRAM FORMAT=COFF,MODEL=FLAT,WIDTH=32,MAXPASSES=32
 INCLUDEHEAD "euroasm.htm" ; Interface (structures, symbols and macros) of other modules.

iiy HEAD ; Start of module interface.
↑ %IiyList
enumerates machine instructions of this family which €ASM can assemble.
Each instruction declared in %IiyList requires the corresponding handler in this file.
See also
DictLookupIi
%IiyList %SET \
VANDPS, \
VANDNPS, \
VORPS, \
VXORPS, \
VANDPD, \
VANDNPD, \
VORPD, \
VXORPD, \
VSQRTSS, \
VSQRTSD, \
VSQRTPS, \
VSQRTPD, \
VRCP14SS, \
VRCP14SD, \
VRCP14PS, \
VRCP14PD, \
VRSQRT14SS, \
VRSQRT14SD, \
VRSQRT14PS, \
VRSQRT14PD, \
VRCP28SS, \
VRCP28SD, \
VRCP28PS, \
VRCP28PD, \
VRSQRT28SS, \
VRSQRT28SD, \
VRSQRT28PS, \
VRSQRT28PD, \
VEXP2PS, \
VEXP2PD, \
VPMOVUSWB, \
VPMOVUSDB, \
VPMOVUSQB, \
VPMOVUSDW, \
VPMOVUSQW, \
VPMOVUSQD, \
VPMOVSWB, \
VPMOVSDB, \
VPMOVSQB, \
VPMOVSDW, \
VPMOVSQW, \
VPMOVSQD, \
VPMOVWB, \
VPMOVDB, \
VPMOVQB, \
VPMOVDW, \
VPMOVQW, \
VPMOVQD, \
VPMOVSXBD, \
VPMOVSXBQ, \
VPMOVSXWD, \
VPMOVSXWQ, \
VPMOVSXDQ, \
VPMOVSXBW, \
VPMOVZXBW, \
VPMOVZXBD, \
VPMOVZXBQ, \
VPMOVZXWD, \
VPMOVZXWQ, \
VPMOVZXDQ, \
VPMULDQ, \
VPMULHRSW, \
VPMULHUW, \
VPMULHW, \
VPMULLD, \
VPMULLQ, \
VPMULLW, \
VPAVGB, \
VPAVGW, \
VPMASKMOVD, \
VPMASKMOVQ, \
VMASKMOVPS, \
VMASKMOVPD, \
VMASKMOVDQU, \
VPMOVMSKB, \
VPBLENDW, \
VPBLENDD, \
VBLENDPS, \
VBLENDPD, \
VBLENDVPS, \
VBLENDVPD, \
VPBLENDVB, \
VLDDQU, \
VLDQQU, \
VLDMXCSR, \
VSTMXCSR, \
VRSQRTSS, \
VRCPSS, \
VRSQRTPS, \
VRCPPS, \
VUNPCKLPS, \
VUNPCKHPS, \
VUNPCKLPD, \
VUNPCKHPD, \
VPUNPCKLBW, \
VPUNPCKLWD, \
VPUNPCKLDQ, \
VPUNPCKLQDQ, \
VPUNPCKHBW, \
VPUNPCKHWD, \
VPUNPCKHDQ, \
VPUNPCKHQDQ, \
VPACKSSWB, \
VPACKSSDW, \
VPACKUSWB, \
VPACKUSDW, \
VSCALEFSS, \
VSCALEFSD, \
VSCALEFPS, \
VSCALEFPD, \
VSCALEPS, \
VRNDSCALESS, \
VRNDSCALESD, \
VRNDSCALEPS, \
VRNDSCALEPD, \
VROUNDSS, \
VROUNDSD, \
VROUNDPS, \
VROUNDPD, \
VPMADDUBSW, \
VPMADD52LUQ, \
VPMADD52HUQ, \
VPHADDW, \
VPHADDD, \
VPHADDSW, \
VPHSUBW, \
VPHSUBD, \
VPHSUBSW, \
VPAND, \
VPANDD, \
VPANDQ, \
VPOR, \
VPORD, \
VPORQ, \
VPANDN, \
VPANDND, \
VPANDNQ, \
VPXOR, \
VPXORD, \
VPXORQ, \
VRANGESS, \
VRANGESD, \
VRANGEPS, \
VRANGEPD, \
VREDUCESS, \
VREDUCESD, \
VREDUCEPS, \
VREDUCEPD, \
VPRORVQ, \
VPRORVD, \
VPROLVD, \
VPROLVQ, \
VPRORD, \
VPROLD, \
VPRORQ, \
VPROLQ, \
VPERMI2B, \
VPERMI2D, \
VPERMI2Q, \
VPERMI2PS, \
VPERMI2PD, \
VPERMI2W, \
VPERMT2B, \
VPERMT2W, \
VPERMT2D, \
VPERMT2Q, \
VPERMT2PS, \
VPERMT2PD, \
VPERMB, \
VPERMW, \
VPERMD, \
VPERMQ, \
VPERMPS, \
VPERMPD, \
VPERMILPS, \
VPERMILPD, \
VPERM2F128, \
VPERM2I128, \
VPTESTMB, \
VPTESTMW, \
VPTESTMD, \
VPTESTMQ, \
VPTESTNMB, \
VPTESTNMW, \
VPTESTNMD, \
VPTESTNMQ, \
VPTERNLOGD, \
VPTERNLOGQ, \
VPALIGNR, \
VPCMPB, \
VPCMPUB, \
VPCMPW, \
VPCMPUW, \
VPCMPD, \
VPCMPUD, \
VPCMPQ, \
VPCMPUQ, \
VPCMPEQB, \
VPCMPLTB, \
VPCMPLEB, \
VPCMPFALSEB, \
VPCMPNEQB, \
VPCMPNLTB, \
VPCMPNLEB, \
VPCMPTRUEB, \
VPCMPEQUB, \
VPCMPLTUB, \
VPCMPLEUB, \
VPCMPFALSEUB, \
VPCMPNEQUB, \
VPCMPNLTUB, \
VPCMPNLEUB, \
VPCMPTRUEUB, \
VPCMPEQW, \
VPCMPLTW, \
VPCMPLEW, \
VPCMPFALSEW, \
VPCMPNEQW, \
VPCMPNLTW, \
VPCMPNLEW, \
VPCMPTRUEW, \
VPCMPEQUW, \
VPCMPLTUW, \
VPCMPLEUW, \
VPCMPFALSEUW, \
VPCMPNEQUW, \
VPCMPNLTUW, \
VPCMPNLEUW, \
VPCMPTRUEUW, \
VPCMPEQD, \
VPCMPLTD, \
VPCMPLED, \
VPCMPFALSED, \
VPCMPNEQD, \
VPCMPNLTD, \
VPCMPNLED, \
VPCMPTRUED, \
VPCMPEQUD, \
VPCMPLTUD, \
VPCMPLEUD, \
VPCMPFALSEUD, \
VPCMPNEQUD, \
VPCMPNLTUD, \
VPCMPNLEUD, \
VPCMPTRUEUD, \
VPCMPEQQ, \
VPCMPLTQ, \
VPCMPLEQ, \
VPCMPFALSEQ, \
VPCMPNEQQ, \
VPCMPNLTQ, \
VPCMPNLEQ, \
VPCMPTRUEQ, \
VPCMPEQUQ, \
VPCMPLTUQ, \
VPCMPLEUQ, \
VPCMPFALSEUQ, \
VPCMPNEQUQ, \
VPCMPNLTUQ, \
VPCMPNLEUQ, \
VPCMPTRUEUQ, \
VPCMPGTW, \
VPCMPGTD, \
VPCMPGTQ, \
VPCMPGTB, \
VPMOVM2B, \
VPMOVM2W, \
VPMOVM2D, \
VPMOVM2Q, \
VPMOVB2M, \
VPMOVW2M, \
VPMOVD2M, \
VPMOVQ2M, \
VPBROADCASTMW2D, \
VPBROADCASTMB2Q, \
VPCONFLICTD, \
VPCONFLICTQ, \
VPMULTISHIFTQB, \
VLOADUNPACKLD, \
VLOADUNPACKLPS, \
VLOADUNPACKHD, \
VLOADUNPACKHPS, \
VLOADUNPACKLQ, \
VLOADUNPACKLPD, \
VLOADUNPACKHQ, \
VLOADUNPACKHPD, \
VPACKSTORELD, \
VPACKSTORELPS, \
VPACKSTOREHD, \
VPACKSTOREHPS, \
VPACKSTORELQ, \
VPACKSTORELPD, \
VPACKSTOREHQ, \
VPACKSTOREHPD, \
VCVTFXPNTUDQ2PS, \
VCVTFXPNTDQ2PS, \
VCVTFXPNTPS2UDQ, \
VCVTFXPNTPS2DQ, \
VCVTFXPNTPD2UDQ, \
VCVTFXPNTPD2DQ, \
VRNDFXPNTPS, \
VRNDFXPNTPD, \
VPERMF32X4, \
VPADCD, \
VPADDSETCD, \
VPSBBD, \
VPSUBRSETBD, \
VPSUBSETBD, \
VPSUBRD, \
VPSBBRD, \
VPMULHUD, \
VPMULHD, \
VFIXUPNANPS, \
VPMADD231D, \
VADDNPS, \
VADDNPD, \
VPADDSETSD, \
VGMAXABSPS, \
VGMINPS, \
VGMAXPS, \
VSUBRPS, \
VADDSETSPS, \
VSUBRPD, \
VGMINPD, \
VGMAXPD, \
VFIXUPNANPD, \
VLOG2PS, \
VEXP223PS, \
VRCP23PS, \
VRSQRT23PS, \
VPLZCNTD, \
VPLZCNTQ, \
VPMADD233D, \
VPREFETCHENTA, \
VPREFETCH0, \
VPREFETCH1, \
VPREFETCH2, \
VPREFETCHE0, \
VPREFETCHE1, \
VPREFETCHE2, \
VPREFETCHNTA, \
CLEVICT0, \
CLEVICT1, \
DELAY, \
SPFLT, \
TZCNTI, \

;
  ENDHEAD iiy ; End of module interface.
↑ VANDPS
Bitwise Logical AND of Packed Single-FP Values
Intel reference
VANDPS xmm1, xmm2, xmm3/m128 VEX.NDS.128.0F 54 /r
VANDPS ymm1, ymm2, ymm3/m256 VEX.NDS.256.0F 54 /r
VANDPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.0F.W0 54 /r
VANDPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.0F.W0 54 /r
VANDPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.0F.W0 54 /r
Category
sse1,simdfp,logical
Operands
Vps,Wps
Opcode
0x0F54 /r
CPU
P3+
Tested by
t5650
IiyVANDPS:: PROC
    IiEmitOpcode 0x54
.op:IiAllowModifier MASK
    IiAllowBroadcasting DWORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.0F, EVEX.NDS.128.0F.W0
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.0F, EVEX.NDS.256.0F.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.0F.W0
    RET
  ENDP IiyVANDPS::
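A hand-written sketch of statements this handler dispatches on (operand symbols such as Src are placeholders; the EVEX decorators use the Intel notation quoted in the reference lines above):
    VANDPS XMM1, XMM2, XMM3                   ; VEX.128 register form.
    VANDPS YMM1, YMM2, [Src]                  ; VEX.256 form with a 256bit memory source.
    VANDPS ZMM1 {K1}{Z}, ZMM2, [Src] {1to16}  ; EVEX.512 form with mask, zeroing and 32bit broadcast.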
↑ VANDNPS
Bitwise Logical AND NOT of Packed Single-FP Values
Intel reference
VANDNPS xmm1, xmm2, xmm3/m128 VEX.NDS.128.0F 55 /r
VANDNPS ymm1, ymm2, ymm3/m256 VEX.NDS.256.0F 55 /r
VANDNPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.0F.W0 55 /r
VANDNPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.0F.W0 55 /r
VANDNPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.0F.W0 55 /r
Category
sse1,simdfp,logical
Operands
Vps,Wps
Opcode
0x0F55 /r
CPU
P3+
Tested by
t5650
IiyVANDNPS:: PROC
    IiEmitOpcode 0x55
    JMP IiyVANDPS.op:
  ENDP IiyVANDNPS::
↑ VORPS
Bitwise Logical OR of Single-FP Values
Intel reference
VORPS xmm1, xmm2, xmm3/m128 VEX.NDS.128.0F 56 /r
VORPS ymm1, ymm2, ymm3/m256 VEX.NDS.256.0F 56 /r
VORPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.0F.W0 56 /r
VORPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.0F.W0 56 /r
VORPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.0F.W0 56 /r
Category
sse1,simdfp,logical
Operands
Vps,Wps
Opcode
0x0F56 /r
CPU
P3+
Tested by
t5650
IiyVORPS:: PROC
    IiEmitOpcode 0x56
    JMP IiyVANDPS.op:
  ENDP IiyVORPS::
↑ VXORPS
Bitwise Logical XOR for Single-FP Values
Intel reference
VXORPS xmm1, xmm2, xmm3/m128 VEX.NDS.128.0F.WIG 57 /r
VXORPS ymm1, ymm2, ymm3/m256 VEX.NDS.256.0F.WIG 57 /r
VXORPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.0F.W0 57 /r
VXORPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.0F.W0 57 /r
VXORPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.0F.W0 57 /r
Category
sse1,simdfp,logical
Operands
Vps,Wps
Opcode
0x0F57 /r
CPU
P3+
Tested by
t5650
IiyVXORPS:: PROC
    IiEmitOpcode 0x57
    JMP IiyVANDPS.op:
  ENDP IiyVXORPS::
↑ VANDPD
Bitwise Logical AND of Packed Double-FP Values
Intel reference
VANDPD xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F 54 /r
VANDPD ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F 54 /r
VANDPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F.W1 54 /r
VANDPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F.W1 54 /r
VANDPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F.W1 54 /r
Category
sse2,pcksclr,logical
Operands
Vpd,Wpd
Opcode
0x660F54 /r
CPU
P4+
Tested by
t5652
IiyVANDPD:: PROC
    IiEmitOpcode 0x54
.op:IiAllowModifier MASK
    IiAllowBroadcasting QWORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F, EVEX.NDS.128.66.0F.W1
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F, EVEX.NDS.256.66.0F.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.W1
    RET
  ENDP IiyVANDPD::
↑ VANDNPD
Bitwise Logical AND NOT of Packed Double-FP Values
Intel reference
VANDNPD xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F 55 /r
VANDNPD ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F 55 /r
VANDNPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F.W1 55 /r
VANDNPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F.W1 55 /r
VANDNPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F.W1 55 /r
Category
sse2,pcksclr,logical
Operands
Vpd,Wpd
Opcode
0x660F55 /r
CPU
P4+
Tested by
t5652
IiyVANDNPD:: PROC
    IiEmitOpcode 0x55
    JMP IiyVANDPD.op:
  ENDP IiyVANDNPD::
↑ VORPD
Bitwise Logical OR of Double-FP Values
Intel reference
VORPD xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F 56 /r
VORPD ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F 56 /r
VORPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F.W1 56 /r
VORPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F.W1 56 /r
VORPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F.W1 56 /r
Category
sse2,pcksclr,logical
Operands
Vpd,Wpd
Opcode
0x660F56 /r
CPU
P4+
Tested by
t5652
IiyVORPD:: PROC
    IiEmitOpcode 0x56
    JMP IiyVANDPD.op:
  ENDP IiyVORPD::
↑ VXORPD
Bitwise Logical XOR for Double-FP Values
Intel reference
VXORPD xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 57 /r
VXORPD ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 57 /r
VXORPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F.W1 57 /r
VXORPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F.W1 57 /r
VXORPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F.W1 57 /r
Category
sse2,pcksclr,logical
Operands
Vpd,Wpd
Opcode
0x660F57 /r
CPU
P4+
Tested by
t5652
IiyVXORPD:: PROC
    IiEmitOpcode 0x57
    JMP IiyVANDPD.op:
  ENDP IiyVXORPD::
↑ VSQRTSS
Compute Square Root of Scalar Single-FP Value
Intel reference
VSQRTSS xmm1, xmm2, xmm3/m32 VEX.NDS.128.F3.0F.WIG 51 /r
VSQRTSS xmm1 {k1}{z}, xmm2, xmm3/m32{er} EVEX.NDS.LIG.F3.0F.W0 51 /r
Category
sse1,simdfp,arith
Operands
Vss,Wss
Opcode
0xF30F51 /r
CPU
P3+
Tested by
t5654
IiyVSQRTSS:: PROC
    IiAllowModifier MASK
    IiAllowRounding Register=xmm
    IiEmitOpcode 0x51
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX T1S32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.F3.0F.WIG, EVEX.NDS.LIG.F3.0F.W0
    RET
  ENDP IiyVSQRTSS::
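A sketch of scalar usage (placeholder operands; {rn-sae} is the Intel notation for embedded rounding, which IiAllowRounding admits only when the last operand is an XMM register):
    VSQRTSS XMM1, XMM2, [Scalar32]            ; VEX form with a 32bit memory source.
    VSQRTSS XMM1 {K1}{Z}, XMM2, XMM3 {RN-SAE} ; EVEX form with mask and embedded rounding.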
↑ VSQRTSD
Compute Square Root of Scalar Double-FP Value
Intel reference
VSQRTSD xmm1, xmm2, xmm3/m64 VEX.NDS.128.F2.0F.WIG 51 /r
VSQRTSD xmm1 {k1}{z}, xmm2, xmm3/m64{er} EVEX.NDS.LIG.F2.0F.W1 51 /r
Category
sse2,pcksclr,arith
Operands
Vsd,Wsd
Opcode
0xF20F51 /r
CPU
P4+
Tested by
t5654
IiyVSQRTSD:: PROC
    IiAllowModifier MASK
    IiAllowRounding Register=xmm
    IiEmitOpcode 0x51
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX T1S64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.F2.0F.WIG, EVEX.NDS.LIG.F2.0F.W1
    RET
  ENDP IiyVSQRTSD::
↑ VSQRTPS
Compute Square Roots of Packed Single-FP Values
Intel reference
VSQRTPS xmm1, xmm2/m128 VEX.128.0F.WIG 51 /r
VSQRTPS ymm1, ymm2/m256 VEX.256.0F.WIG 51 /r
VSQRTPS xmm1 {k1}{z}, xmm2/m128/m32bcst EVEX.128.0F.W0 51 /r
VSQRTPS ymm1 {k1}{z}, ymm2/m256/m32bcst EVEX.256.0F.W0 51 /r
VSQRTPS zmm1 {k1}{z}, zmm2/m512/m32bcst{er} EVEX.512.0F.W0 51 /r
Category
sse1,simdfp,arith
Operands
Vps,Wps
Opcode
0x0F51 /r
CPU
P3+
Tested by
t5654
IiyVSQRTPS:: PROC
    IiAllowModifier MASK
    IiAllowRounding
    IiAllowBroadcasting DWORD
    IiEmitOpcode 0x51
    IiOpEn RM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm, xmm.mem, ymm.ymm, ymm.mem, zmm.zmm, zmm.mem
.xmm.xmm:
.xmm.mem:
    IiEmitPrefix VEX.128.0F.WIG, EVEX.128.0F.W0
    RET
.ymm.ymm:
.ymm.mem:
    IiEmitPrefix VEX.256.0F.WIG, EVEX.256.0F.W0
    RET
.zmm.zmm:
.zmm.mem:
    IiEmitPrefix EVEX.512.0F.W0
    RET
  ENDP IiyVSQRTPS::
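A sketch of the packed forms (placeholder operands, Intel decorator notation as above):
    VSQRTPS XMM1, [Src]              ; VEX.128 form with a 128bit memory source.
    VSQRTPS ZMM1 {K1}, [Src] {1to16} ; EVEX.512 form with a broadcast 32bit memory source.
    VSQRTPS ZMM1, ZMM2 {RZ-SAE}      ; EVEX.512 register form with embedded rounding.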
↑ VSQRTPD
Compute Square Roots of Packed Double-FP Values
Intel reference
VSQRTPD xmm1, xmm2/m128 VEX.128.66.0F.WIG 51 /r
VSQRTPD ymm1, ymm2/m256 VEX.256.66.0F.WIG 51 /r
VSQRTPD xmm1 {k1}{z}, xmm2/m128/m64bcst EVEX.128.66.0F.W1 51 /r
VSQRTPD ymm1 {k1}{z}, ymm2/m256/m64bcst EVEX.256.66.0F.W1 51 /r
VSQRTPD zmm1 {k1}{z}, zmm2/m512/m64bcst{er} EVEX.512.66.0F.W1 51 /r
Category
sse2,pcksclr,arith
Operands
Vpd,Wpd
Opcode
0x660F51 /r
CPU
P4+
Tested by
t5654
IiyVSQRTPD:: PROC
    IiAllowModifier MASK
    IiAllowRounding
    IiAllowBroadcasting QWORD
    IiEmitOpcode 0x51
    IiOpEn RM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm, xmm.mem, ymm.ymm, ymm.mem, zmm.zmm, zmm.mem
.xmm.xmm:
.xmm.mem:
    IiEmitPrefix VEX.128.66.0F.WIG, EVEX.128.66.0F.W1
    RET
.ymm.ymm:
.ymm.mem:
    IiEmitPrefix VEX.256.66.0F.WIG, EVEX.256.66.0F.W1
    RET
.zmm.zmm:
.zmm.mem:
    IiEmitPrefix EVEX.512.66.0F.W1
    RET
  ENDP IiyVSQRTPD::
↑ VRCP14SS
Compute Approximate Reciprocal of Scalar Float32 Value
Intel reference
VRCP14SS xmm1 {k1}{z}, xmm2, xmm3/m32 EVEX.NDS.LIG.66.0F38.W0 4D /r
Opcode
0x4D
Tested by
t5656
IiyVRCP14SS:: PROC
    IiEmitOpcode 0x4D
.op:IiAllowModifier MASK
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX T1S32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.LIG.66.0F38.W0
    RET
  ENDP IiyVRCP14SS::
↑ VRCP14SD
Compute Approximate Reciprocal of Scalar Float64 Value
Intel reference
VRCP14SD xmm1 {k1}{z}, xmm2, xmm3/m64 EVEX.NDS.LIG.66.0F38.W1 4D /r
Opcode
0x4D
Tested by
t5656
IiyVRCP14SD:: PROC
    IiEmitOpcode 0x4D
.op:IiAllowModifier MASK
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX T1S64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.LIG.66.0F38.W1
    RET
  ENDP IiyVRCP14SD::
↑ VRCP14PS
Compute Approximate Reciprocals of Packed Float32 Values
Intel reference
VRCP14PS xmm1 {k1}{z}, xmm2/m128/m32bcst EVEX.128.66.0F38.W0 4C /r
VRCP14PS ymm1 {k1}{z}, ymm2/m256/m32bcst EVEX.256.66.0F38.W0 4C /r
VRCP14PS zmm1 {k1}{z}, zmm2/m512/m32bcst EVEX.512.66.0F38.W0 4C /r
Opcode
0x4C
Tested by
t5656
IiyVRCP14PS:: PROC
    IiEmitOpcode 0x4C
.op:IiAllowModifier MASK
    IiAllowBroadcasting DWORD
    IiOpEn RM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm, xmm.mem, ymm.ymm, ymm.mem, zmm.zmm, zmm.mem
.xmm.xmm:
.xmm.mem:
    IiEmitPrefix EVEX.128.66.0F38.W0
    RET
.ymm.ymm:
.ymm.mem:
    IiEmitPrefix EVEX.256.66.0F38.W0
    RET
.zmm.zmm:
.zmm.mem:
    IiEmitPrefix EVEX.512.66.0F38.W0
    RET
  ENDP IiyVRCP14PS::
↑ VRCP14PD
Compute Approximate Reciprocals of Packed Float64 Values
Intel reference
VRCP14PD xmm1 {k1}{z}, xmm2/m128/m64bcst EVEX.128.66.0F38.W1 4C /r
VRCP14PD ymm1 {k1}{z}, ymm2/m256/m64bcst EVEX.256.66.0F38.W1 4C /r
VRCP14PD zmm1 {k1}{z}, zmm2/m512/m64bcst EVEX.512.66.0F38.W1 4C /r
Opcode
0x4C
Tested by
t5656
IiyVRCP14PD:: PROC
    IiEmitOpcode 0x4C
.op:IiAllowModifier MASK
    IiAllowBroadcasting QWORD
    IiOpEn RM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm, xmm.mem, ymm.ymm, ymm.mem, zmm.zmm, zmm.mem
.xmm.xmm:
.xmm.mem:
    IiEmitPrefix EVEX.128.66.0F38.W1
    RET
.ymm.ymm:
.ymm.mem:
    IiEmitPrefix EVEX.256.66.0F38.W1
    RET
.zmm.zmm:
.zmm.mem:
    IiEmitPrefix EVEX.512.66.0F38.W1
    RET
  ENDP IiyVRCP14PD::
↑ VRSQRT14SS
Compute Approximate Reciprocal of Square Root of Scalar Float32 Value
Intel reference
VRSQRT14SS xmm1 {k1}{z}, xmm2, xmm3/m32 EVEX.NDS.LIG.66.0F38.W0 4F /r
Opcode
0x4F
Tested by
t5658
IiyVRSQRT14SS:: PROC
    IiEmitOpcode 0x4F
    JMP IiyVRCP14SS.op:
  ENDP IiyVRSQRT14SS::
↑ VRSQRT14SD
Compute Approximate Reciprocal of Square Root of Scalar Float64 Value
Intel reference
VRSQRT14SD xmm1 {k1}{z}, xmm2, xmm3/m64 EVEX.NDS.LIG.66.0F38.W1 4F /r
Opcode
0x4F
Tested by
t5658
IiyVRSQRT14SD:: PROC
    IiEmitOpcode 0x4F
    JMP IiyVRCP14SD.op:
  ENDP IiyVRSQRT14SD::
↑ VRSQRT14PS
Compute Approximate Reciprocals of Square Roots of Packed Float32 Values
Intel reference
VRSQRT14PS xmm1 {k1}{z}, xmm2/m128/m32bcst EVEX.128.66.0F38.W0 4E /r
VRSQRT14PS ymm1 {k1}{z}, ymm2/m256/m32bcst EVEX.256.66.0F38.W0 4E /r
VRSQRT14PS zmm1 {k1}{z}, zmm2/m512/m32bcst EVEX.512.66.0F38.W0 4E /r
Opcode
0x4E
Tested by
t5658
IiyVRSQRT14PS:: PROC
    IiEmitOpcode 0x4E
    JMP IiyVRCP14PS.op:
  ENDP IiyVRSQRT14PS::
↑ VRSQRT14PD
Compute Approximate Reciprocals of Square Roots of Packed Float64 Values
Intel reference
VRSQRT14PD xmm1 {k1}{z}, xmm2/m128/m64bcst EVEX.128.66.0F38.W1 4E /r
VRSQRT14PD ymm1 {k1}{z}, ymm2/m256/m64bcst EVEX.256.66.0F38.W1 4E /r
VRSQRT14PD zmm1 {k1}{z}, zmm2/m512/m64bcst EVEX.512.66.0F38.W1 4E /r
Opcode
0x4E
Tested by
t5658
IiyVRSQRT14PD:: PROC
    IiEmitOpcode 0x4E
    JMP IiyVRCP14PD.op:
  ENDP IiyVRSQRT14PD::
↑ VRCP28SS
Approximation to the Reciprocal of Scalar Single-Precision Floating-Point Value with Less Than 2^-28 Relative Error
Intel reference
VRCP28SS xmm1 {k1}{z}, xmm2, xmm3/m32 {sae} EVEX.NDS.LIG.66.0F38.W0 CB /r
Opcode
0xCB
Tested by
t5660
IiyVRCP28SS:: PROC
    IiEmitOpcode 0xCB
.op:IiAllowModifier MASK
    IiAllowSuppressing Register=xmm
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX T1S32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.LIG.66.0F38.W0
    RET
  ENDP IiyVRCP28SS::
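A sketch of usage ({sae} is the Intel notation for suppress-all-exceptions, which IiAllowSuppressing admits only when the last operand is an XMM register):
    VRCP28SS XMM1 {K1}{Z}, XMM2, XMM3 {SAE}   ; EVEX register form with exceptions suppressed.
    VRCP28SS XMM1, XMM2, [Scalar32]           ; EVEX form with a 32bit memory source.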
↑ VRCP28SD
Approximation to the Reciprocal of Scalar Double-Precision Floating-Point Value with Less Than 2^-28 Relative Error
Intel reference
VRCP28SD xmm1 {k1}{z}, xmm2, xmm3/m64 {sae} EVEX.NDS.LIG.66.0F38.W1 CB /r
Opcode
0xCB
Tested by
t5660
IiyVRCP28SD:: PROC
    IiEmitOpcode 0xCB
.op:IiAllowModifier MASK
    IiAllowSuppressing Register=xmm
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX T1S64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.LIG.66.0F38.W1
    RET
  ENDP IiyVRCP28SD::
↑ VRCP28PS
Approximation to the Reciprocal of Packed Single-Precision Floating-Point Values with Less Than 2^-28 Relative Error
Intel reference
VRCP28PS zmm1 {k1}{z}, zmm2/m512/m32bcst {sae} EVEX.512.66.0F38.W0 CA /r
Opcode
0xCA
Tested by
t5660
IiyVRCP28PS:: PROC
    IiEmitOpcode 0xCA
.op:IiAllowModifier MASK
    IiAllowSuppressing 
    IiAllowBroadcasting DWORD
    IiOpEn RM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDispatchFormat  zmm.zmm, zmm.mem
.zmm.zmm:
.zmm.mem:
    IiEmitPrefix EVEX.512.66.0F38.W0
    RET
  ENDP IiyVRCP28PS::
↑ VRCP28PD
Approximation to the Reciprocal of Packed Double-Precision Floating-Point Values with Less Than 2^-28 Relative Error
Intel reference
VRCP28PD zmm1 {k1}{z}, zmm2/m512/m64bcst {sae} EVEX.512.66.0F38.W1 CA /r
Opcode
0xCA
Tested by
t5660
IiyVRCP28PD:: PROC
    IiEmitOpcode 0xCA
.op:IiAllowModifier MASK
    IiAllowSuppressing
    IiAllowBroadcasting QWORD
    IiOpEn RM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  zmm.zmm, zmm.mem
.zmm.zmm:
.zmm.mem:
    IiEmitPrefix EVEX.512.66.0F38.W1
    RET
  ENDP IiyVRCP28PD::
↑ VRSQRT28SS
Approximation to the Reciprocal Square Root of Scalar Single-Precision Floating-Point Value with Less Than 2^-28 Relative Error
Intel reference
VRSQRT28SS xmm1 {k1}{z}, xmm2, xmm3/m32 {sae} EVEX.NDS.LIG.66.0F38.W0 CD /r
Opcode
0xCD
Tested by
t5662
IiyVRSQRT28SS:: PROC
    IiEmitOpcode 0xCD
    JMP IiyVRCP28SS.op:
  ENDP IiyVRSQRT28SS::
↑ VRSQRT28SD
Approximation to the Reciprocal Square Root of Scalar Double-Precision Floating-Point Value with Less Than 2^-28 Relative Error
Intel reference
VRSQRT28SD xmm1 {k1}{z}, xmm2, xmm3/m64 {sae} EVEX.NDS.LIG.66.0F38.W1 CD /r
Opcode
0xCD
Tested by
t5662
IiyVRSQRT28SD:: PROC
    IiEmitOpcode 0xCD
    JMP IiyVRCP28SD.op:
  ENDP IiyVRSQRT28SD::
↑ VRSQRT28PS
Approximation to the Reciprocal Square Root of Packed Single-Precision Floating-Point Values with Less Than 2^-28 Relative Error
Intel reference
VRSQRT28PS zmm1 {k1}{z}, zmm2/m512/m32bcst {sae} EVEX.512.66.0F38.W0 CC /r
Opcode
0xCC
Tested by
t5662
IiyVRSQRT28PS:: PROC
    IiEmitOpcode 0xCC
    JMP IiyVRCP28PS.op:
  ENDP IiyVRSQRT28PS::
↑ VRSQRT28PD
Approximation to the Reciprocal Square Root of Packed Double-Precision Floating-Point Values with Less Than 2^-28 Relative Error
Intel reference
VRSQRT28PD zmm1 {k1}{z}, zmm2/m512/m64bcst {sae} EVEX.512.66.0F38.W1 CC /r
Opcode
0xCC
Tested by
t5662
IiyVRSQRT28PD:: PROC
    IiEmitOpcode 0xCC
    JMP IiyVRCP28PD.op:
  ENDP IiyVRSQRT28PD::
↑ VEXP2PS
Approximation to the Exponential 2^x of Packed Single-Precision Floating-Point Values with Less Than 2^-23 Relative Error
Intel reference
VEXP2PS zmm1 {k1}{z}, zmm2/m512/m32bcst {sae} EVEX.512.66.0F38.W0 C8 /r
Opcode
0xC8
Tested by
t5664
IiyVEXP2PS:: PROC
    IiEmitOpcode 0xC8
    JMP IiyVRCP28PS.op:
  ENDP IiyVEXP2PS::
↑ VEXP2PD
Approximation to the Exponential 2^x of Packed Double-Precision Floating-Point Values with Less Than 2^-23 Relative Error
Intel reference
VEXP2PD zmm1 {k1}{z}, zmm2/m512/m64bcst {sae} EVEX.512.66.0F38.W1 C8 /r
Opcode
0xC8
Tested by
t5664
IiyVEXP2PD:: PROC
    IiEmitOpcode 0xC8
    JMP IiyVRCP28PD.op:
  ENDP IiyVEXP2PD::
↑ VPMOVUSWB
Down Convert Unsigned Word to Byte using unsigned saturation
Intel reference
VPMOVUSWB xmm1/m64 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 10 /r
VPMOVUSWB xmm1/m128 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 10 /r
VPMOVUSWB ymm1/m256 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 10 /r
Opcode
0x10
Tested by
t5670
IiyVPMOVUSWB:: PROC
     IiEmitOpcode 0x10
.HVM:IiDisp8EVEX HVM
     IiAllowModifier MASK
     IiOpEn MR
     IiModRM /r
     IiDispatchFormat  xmm.xmm, mem.xmm, xmm.ymm, mem.ymm, ymm.zmm, mem.zmm
.xmm.xmm:
.mem.xmm:
    IiEncoding DATA=QWORD
    IiEmitPrefix EVEX.128.F3.0F38.W0
    RET
.xmm.ymm:
.mem.ymm:
    IiEncoding DATA=OWORD
    IiEmitPrefix EVEX.256.F3.0F38.W0
    RET
.ymm.zmm:
.mem.zmm:
    IiEncoding DATA=YWORD
    IiEmitPrefix EVEX.512.F3.0F38.W0
    RET
  ENDP IiyVPMOVUSWB::
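Because the operand encoding is MR and the tuple is HVM, the narrowed result may go straight to memory and the destination is half as wide as the source. A sketch with placeholder operands:
    VPMOVUSWB XMM1 {K1}{Z}, XMM2    ; 8 words of XMM2 saturated to 8 bytes in XMM1.
    VPMOVUSWB [Dst] {K1}, ZMM2      ; 32 words of ZMM2 saturated to 32 bytes stored to m256.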
↑ VPMOVUSDB
Down Convert Unsigned DWord to Byte using unsigned saturation
Intel reference
VPMOVUSDB xmm1/m32 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 11 /r
VPMOVUSDB xmm1/m64 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 11 /r
VPMOVUSDB xmm1/m128 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 11 /r
Opcode
0x11
Tested by
t5670
IiyVPMOVUSDB:: PROC
     IiEmitOpcode 0x11
.QVM:IiDisp8EVEX QVM
     IiAllowModifier MASK
     IiOpEn MR
     IiModRM /r
     IiDispatchFormat  xmm.xmm, mem.xmm, xmm.ymm, mem.ymm, xmm.zmm, mem.zmm
.xmm.xmm:
.mem.xmm:
    IiEncoding DATA=DWORD
    IiEmitPrefix EVEX.128.F3.0F38.W0
    RET
.xmm.ymm:
.mem.ymm:
    IiEncoding DATA=QWORD
    IiEmitPrefix EVEX.256.F3.0F38.W0
    RET
.xmm.zmm:
.mem.zmm:
    IiEncoding DATA=OWORD
    IiEmitPrefix EVEX.512.F3.0F38.W0
    RET
  ENDP IiyVPMOVUSDB::
↑ VPMOVUSQB
Down Convert Unsigned QWord to Byte using unsigned saturation
Intel reference
VPMOVUSQB xmm1/m16 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 12 /r
VPMOVUSQB xmm1/m32 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 12 /r
VPMOVUSQB xmm1/m64 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 12 /r
Opcode
0x12
Tested by
t5670
IiyVPMOVUSQB:: PROC
     IiEmitOpcode 0x12
.OVM:IiDisp8EVEX OVM
     IiAllowModifier MASK
     IiOpEn MR
     IiModRM /r
     IiDispatchFormat  xmm.xmm, mem.xmm, xmm.ymm, mem.ymm, xmm.zmm, mem.zmm
.xmm.xmm:
.mem.xmm:
    IiEncoding DATA=WORD
    IiEmitPrefix EVEX.128.F3.0F38.W0
    RET
.xmm.ymm:
.mem.ymm:
    IiEncoding DATA=DWORD
    IiEmitPrefix EVEX.256.F3.0F38.W0
    RET
.xmm.zmm:
.mem.zmm:
    IiEncoding DATA=QWORD
    IiEmitPrefix EVEX.512.F3.0F38.W0
    RET
  ENDP IiyVPMOVUSQB::
↑ VPMOVUSDW
Down Convert Unsigned DWord to Word using unsigned saturation
Intel reference
VPMOVUSDW xmm1/m64 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 13 /r
VPMOVUSDW xmm1/m128 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 13 /r
VPMOVUSDW ymm1/m256 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 13 /r
Opcode
0x13
Tested by
t5670
IiyVPMOVUSDW:: PROC
    IiEmitOpcode 0x13
    JMP IiyVPMOVUSWB.HVM:
  ENDP IiyVPMOVUSDW::
↑ VPMOVUSQW
Down Convert Unsigned QWord to Word using unsigned saturation
Intel reference
VPMOVUSQW xmm1/m32 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 14 /r
VPMOVUSQW xmm1/m64 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 14 /r
VPMOVUSQW xmm1/m128 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 14 /r
Opcode
0x14
Tested by
t5670
IiyVPMOVUSQW:: PROC
    IiEmitOpcode 0x14
    JMP IiyVPMOVUSDB.QVM:
  ENDP IiyVPMOVUSQW::
↑ VPMOVUSQD
Down Convert Unsigned QWord to DWord using unsigned saturation
Intel reference
VPMOVUSQD xmm1/m64 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 15 /r
VPMOVUSQD xmm1/m128 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 15 /r
VPMOVUSQD ymm1/m256 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 15 /r
Opcode
0x15
Tested by
t5670
IiyVPMOVUSQD:: PROC
    IiEmitOpcode 0x15
    JMP IiyVPMOVUSWB.HVM:
  ENDP IiyVPMOVUSQD::
↑ VPMOVSWB
Down Convert Signed Word to Byte using signed saturation
Intel reference
VPMOVSWB xmm1/m64 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 20 /r
VPMOVSWB xmm1/m128 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 20 /r
VPMOVSWB ymm1/m256 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 20 /r
Opcode
0x20
Tested by
t5672
IiyVPMOVSWB:: PROC
    IiEmitOpcode 0x20
    JMP IiyVPMOVUSWB.HVM:
  ENDP IiyVPMOVSWB::
↑ VPMOVSDB
Down Convert Signed DWord to Byte using signed saturation
Intel reference
VPMOVSDB xmm1/m32 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 21 /r
VPMOVSDB xmm1/m64 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 21 /r
VPMOVSDB xmm1/m128 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 21 /r
Opcode
0x21
Tested by
t5672
IiyVPMOVSDB:: PROC
    IiEmitOpcode 0x21
    JMP IiyVPMOVUSDB.QVM:
  ENDP IiyVPMOVSDB::
↑ VPMOVSQB
Down Convert Signed QWord to Byte using signed saturation
Intel reference
VPMOVSQB xmm1/m16 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 22 /r
VPMOVSQB xmm1/m32 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 22 /r
VPMOVSQB xmm1/m64 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 22 /r
Opcode
0x22
Tested by
t5672
IiyVPMOVSQB:: PROC
    IiEmitOpcode 0x22
    JMP IiyVPMOVUSQB.OVM:
  ENDP IiyVPMOVSQB::
↑ VPMOVSDW
Down Convert Signed DWord to Word using signed saturation
Intel reference
VPMOVSDW xmm1/m64 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 23 /r
VPMOVSDW xmm1/m128 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 23 /r
VPMOVSDW ymm1/m256 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 23 /r
Opcode
0x23
Tested by
t5672
IiyVPMOVSDW:: PROC
    IiEmitOpcode 0x23
    JMP IiyVPMOVUSWB.HVM:
  ENDP IiyVPMOVSDW::
↑ VPMOVSQW
Down Convert Signed QWord to Word using signed saturation
Intel reference
VPMOVSQW xmm1/m32 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 24 /r
VPMOVSQW xmm1/m64 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 24 /r
VPMOVSQW xmm1/m128 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 24 /r
Opcode
0x24
Tested by
t5672
IiyVPMOVSQW:: PROC
    IiEmitOpcode 0x24
    JMP IiyVPMOVUSDB.QVM:
  ENDP IiyVPMOVSQW::
↑ VPMOVSQD
Down Convert Signed QWord to DWord using signed saturation
Intel reference
VPMOVSQD xmm1/m64 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 25 /r
VPMOVSQD xmm1/m128 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 25 /r
VPMOVSQD ymm1/m256 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 25 /r
Opcode
0x25
Tested by
t5672
IiyVPMOVSQD:: PROC
    IiEmitOpcode 0x25
    JMP IiyVPMOVUSWB.HVM:
  ENDP IiyVPMOVSQD::
↑ VPMOVWB
Down Convert Word to Byte with truncation
Intel reference
VPMOVWB xmm1/m64 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 30 /r
VPMOVWB xmm1/m128 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 30 /r
VPMOVWB ymm1/m256 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 30 /r
Opcode
0x30
Tested by
t5674
IiyVPMOVWB:: PROC
    IiEmitOpcode 0x30
    JMP IiyVPMOVUSWB.HVM:
  ENDP IiyVPMOVWB::
↑ VPMOVDB
Down Convert DWord to Byte with truncation
Intel reference
VPMOVDB xmm1/m32 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 31 /r
VPMOVDB xmm1/m64 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 31 /r
VPMOVDB xmm1/m128 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 31 /r
Opcode
0x31
Tested by
t5674
IiyVPMOVDB:: PROC
    IiEmitOpcode 0x31
    JMP IiyVPMOVUSDB.QVM:
  ENDP IiyVPMOVDB::
↑ VPMOVQB
Down Convert QWord to Byte with truncation
Intel reference
VPMOVQB xmm1/m16 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 32 /r
VPMOVQB xmm1/m32 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 32 /r
VPMOVQB xmm1/m64 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 32 /r
Opcode
0x32
Tested by
t5674
IiyVPMOVQB:: PROC
    IiEmitOpcode 0x32
    JMP IiyVPMOVUSQB.OVM:
  ENDP IiyVPMOVQB::
↑ VPMOVDW
Down Convert DWord to Word with truncation
Intel reference
VPMOVDW xmm1/m64 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 33 /r
VPMOVDW xmm1/m128 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 33 /r
VPMOVDW ymm1/m256 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 33 /r
Opcode
0x33
Tested by
t5674
IiyVPMOVDW:: PROC
    IiEmitOpcode 0x33
    JMP IiyVPMOVUSWB.HVM:
  ENDP IiyVPMOVDW::
↑ VPMOVQW
Down Convert QWord to Word with truncation
Intel reference
VPMOVQW xmm1/m32 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 34 /r
VPMOVQW xmm1/m64 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 34 /r
VPMOVQW xmm1/m128 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 34 /r
Opcode
0x34
Tested by
t5674
IiyVPMOVQW:: PROC
    IiEmitOpcode 0x34
    JMP IiyVPMOVUSDB.QVM:
  ENDP IiyVPMOVQW::
↑ VPMOVQD
Down Convert QWord to DWord with truncation
Intel reference
VPMOVQD xmm1/m64 {k1}{z}, xmm2 EVEX.128.F3.0F38.W0 35 /r
VPMOVQD xmm1/m128 {k1}{z}, ymm2 EVEX.256.F3.0F38.W0 35 /r
VPMOVQD ymm1/m256 {k1}{z}, zmm2 EVEX.512.F3.0F38.W0 35 /r
Opcode
0x35
Tested by
t5674
IiyVPMOVQD:: PROC
    IiEmitOpcode 0x35
    JMP IiyVPMOVUSWB.HVM:
  ENDP IiyVPMOVQD::
↑ VPMOVSXBW
Packed Move with Sign Extend
Intel reference
VPMOVSXBW xmm1, xmm2/m64 VEX.128.66.0F38.WIG 20 /r
VPMOVSXBW ymm1, xmm2/m128 VEX.256.66.0F38.WIG 20 /r
VPMOVSXBW xmm1 {k1}{z}, xmm2/m64 EVEX.128.66.0F38.WIG 20 /r
VPMOVSXBW ymm1 {k1}{z}, xmm2/m128 EVEX.256.66.0F38.WIG 20 /r
VPMOVSXBW zmm1 {k1}{z}, ymm2/m256 EVEX.512.66.0F38.WIG 20 /r
Category
sse41,simdint,conver
Operands
Vdq,Mq | Vdq,Udq
Opcode
0x660F3820 /r | 0x660F3820 /r
CPU
C2++
Documented
D43
Tested by
t5676
IiyVPMOVSXBW:: PROC
     IiEmitOpcode 0x20
.HVM:IiDisp8EVEX HVM
     IiAllowModifier MASK
     IiOpEn RM
     IiModRM /r
     IiDispatchFormat  xmm.xmm, xmm.mem, ymm.xmm, ymm.mem, zmm.ymm, zmm.mem
.xmm.xmm:
.xmm.mem:
    IiEncoding DATA=QWORD
    IiEmitPrefix VEX.128.66.0F38.WIG, EVEX.128.66.0F38.WIG
    RET
.ymm.xmm:
.ymm.mem:
    IiEncoding DATA=OWORD
    IiEmitPrefix VEX.256.66.0F38.WIG, EVEX.256.66.0F38.WIG
    RET
.zmm.ymm:
.zmm.mem:
    IiEncoding DATA=YWORD
    IiEmitPrefix EVEX.512.66.0F38.WIG
    RET
  ENDP IiyVPMOVSXBW::
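A sketch of usage with placeholder operands; the memory source is only half as wide as the destination register:
    VPMOVSXBW XMM1, [Src]           ; 8 bytes from a m64 source sign-extended to 8 words.
    VPMOVSXBW ZMM1 {K1}{Z}, YMM2    ; EVEX.512 form: 32 bytes widened to 32 words.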
↑ VPMOVSXBD
Packed Move with Sign Extend
Intel reference
VPMOVSXBD xmm1, xmm2/m32 VEX.128.66.0F38.WIG 21 /r
VPMOVSXBD ymm1, xmm2/m64 VEX.256.66.0F38.WIG 21 /r
VPMOVSXBD xmm1 {k1}{z}, xmm2/m32 EVEX.128.66.0F38.WIG 21 /r
VPMOVSXBD ymm1 {k1}{z}, xmm2/m64 EVEX.256.66.0F38.WIG 21 /r
VPMOVSXBD zmm1 {k1}{z}, xmm2/m128 EVEX.512.66.0F38.WIG 21 /r
Category
sse41,simdint,conver
Operands
Vdq,Md | Vdq,Udq
Opcode
0x660F3821 /r | 0x660F3821 /r
CPU
C2++
Documented
D43
Tested by
t5676
IiyVPMOVSXBD:: PROC
     IiEmitOpcode 0x21
.QVM:IiDisp8EVEX QVM
     IiAllowModifier MASK
     IiOpEn RM
     IiModRM /r
     IiDispatchFormat  xmm.xmm, xmm.mem, ymm.xmm, ymm.mem, zmm.xmm, zmm.mem
.xmm.xmm:
.xmm.mem:
    IiEncoding DATA=DWORD
    IiEmitPrefix VEX.128.66.0F38.WIG, EVEX.128.66.0F38.WIG
    RET
.ymm.xmm:
.ymm.mem:
    IiEncoding DATA=QWORD
    IiEmitPrefix VEX.256.66.0F38.WIG, EVEX.256.66.0F38.WIG
    RET
.zmm.xmm:
.zmm.mem:
    IiEncoding DATA=OWORD
    IiEmitPrefix EVEX.512.66.0F38.WIG
    RET
  ENDP IiyVPMOVSXBD::
↑ VPMOVSXBQ
Packed Move with Sign Extend
Intel reference
VPMOVSXBQ xmm1, xmm2/m16 VEX.128.66.0F38.WIG 22 /r
VPMOVSXBQ ymm1, xmm2/m32 VEX.256.66.0F38.WIG 22 /r
VPMOVSXBQ xmm1 {k1}{z}, xmm2/m16 EVEX.128.66.0F38.WIG 22 /r
VPMOVSXBQ ymm1 {k1}{z}, xmm2/m32 EVEX.256.66.0F38.WIG 22 /r
VPMOVSXBQ zmm1 {k1}{z}, xmm2/m64 EVEX.512.66.0F38.WIG 22 /r
Category
sse41,simdint,conver
Operands
Vdq,Mw | Vdq,Udq
Opcode
0x660F3822 /r | 0x660F3822 /r
CPU
C2++
Documented
D43
Tested by
t5676
IiyVPMOVSXBQ:: PROC
     IiEmitOpcode 0x22
.OVM:IiDisp8EVEX OVM
     IiAllowModifier MASK
     IiOpEn RM
     IiModRM /r
     IiDispatchFormat  xmm.xmm, xmm.mem, ymm.xmm, ymm.mem, zmm.xmm, zmm.mem
.xmm.xmm:
.xmm.mem:
    IiEncoding DATA=WORD
    IiEmitPrefix VEX.128.66.0F38.WIG, EVEX.128.66.0F38.WIG
    RET
.ymm.xmm:
.ymm.mem:
    IiEncoding DATA=DWORD
    IiEmitPrefix VEX.256.66.0F38.WIG, EVEX.256.66.0F38.WIG
    RET
.zmm.xmm:
.zmm.mem:
    IiEncoding DATA=QWORD
    IiEmitPrefix EVEX.512.66.0F38.WIG
    RET
  ENDP IiyVPMOVSXBQ::
↑ VPMOVSXWD
Packed Move with Sign Extend
Intel reference
VPMOVSXWD xmm1, xmm2/m64 VEX.128.66.0F38.WIG 23 /r
VPMOVSXWD ymm1, xmm2/m128 VEX.256.66.0F38.WIG 23 /r
VPMOVSXWD xmm1 {k1}{z}, xmm2/m64 EVEX.128.66.0F38.WIG 23 /r
VPMOVSXWD ymm1 {k1}{z}, xmm2/m128 EVEX.256.66.0F38.WIG 23 /r
VPMOVSXWD zmm1 {k1}{z}, ymm2/m256 EVEX.512.66.0F38.WIG 23 /r
Category
sse41,simdint,conver
Operands
Vdq,Mq | Vdq,Udq
Opcode
0x660F3823 /r | 0x660F3823 /r
CPU
C2++
Documented
D43
Tested by
t5676
IiyVPMOVSXWD:: PROC
     IiEmitOpcode 0x23
     JMP IiyVPMOVSXBW.HVM:
  ENDP IiyVPMOVSXWD::
↑ VPMOVSXWQ
Packed Move with Sign Extend
Intel reference
VPMOVSXWQ xmm1, xmm2/m32 VEX.128.66.0F38.WIG 24 /r
VPMOVSXWQ ymm1, xmm2/m64 VEX.256.66.0F38.WIG 24 /r
VPMOVSXWQ xmm1 {k1}{z}, xmm2/m32 EVEX.128.66.0F38.WIG 24 /r
VPMOVSXWQ ymm1 {k1}{z}, xmm2/m64 EVEX.256.66.0F38.WIG 24 /r
VPMOVSXWQ zmm1 {k1}{z}, xmm2/m128 EVEX.512.66.0F38.WIG 24 /r
Category
sse41,simdint,conver
Operands
Vdq,Md | Vdq,Udq
Opcode
0x660F3824 /r | 0x660F3824 /r
CPU
C2++
Documented
D43
Tested by
t5676
IiyVPMOVSXWQ:: PROC
    IiEmitOpcode 0x24
    JMP IiyVPMOVSXBD.QVM:
  ENDP IiyVPMOVSXWQ::
↑ VPMOVSXDQ
Packed Move with Sign Extend
Intel reference
VPMOVSXDQ xmm1, xmm2/m64 VEX.128.66.0F38.WIG 25 /r
VPMOVSXDQ ymm1, xmm2/m128 VEX.256.66.0F38.WIG 25 /r
VPMOVSXDQ xmm1 {k1}{z}, xmm2/m64 EVEX.128.66.0F38.W0 25 /r
VPMOVSXDQ ymm1 {k1}{z}, xmm2/m128 EVEX.256.66.0F38.W0 25 /r
VPMOVSXDQ zmm1 {k1}{z}, ymm2/m256 EVEX.512.66.0F38.W0 25 /r
Category
sse41,simdint,conver
Operands
Vdq,Mq | Vdq,Udq
Opcode
0x660F3825 /r | 0x660F3825 /r
CPU
C2++
Documented
D43
Tested by
t5676
IiyVPMOVSXDQ:: PROC
    IiEmitOpcode 0x25
    JMP IiyVPMOVSXBW.HVM:
  ENDP IiyVPMOVSXDQ::
↑ VPMOVZXBW
Packed Move with Zero Extend
Intel reference
VPMOVZXBW xmm1, xmm2/m64 VEX.128.66.0F38.WIG 30 /r
VPMOVZXBW ymm1, xmm2/m128 VEX.256.66.0F38.WIG 30 /r
VPMOVZXBW xmm1 {k1}{z}, xmm2/m64 EVEX.128.66.0F38.WIG 30 /r
VPMOVZXBW ymm1 {k1}{z}, xmm2/m128 EVEX.256.66.0F38.WIG 30 /r
VPMOVZXBW zmm1 {k1}{z}, ymm2/m256 EVEX.512.66.0F38.WIG 30 /r
Category
sse41,simdint,conver
Operands
Vdq,Mq | Vdq,Udq
Opcode
0x660F3830 /r | 0x660F3830 /r
CPU
C2++
Documented
D43
Tested by
t5678
IiyVPMOVZXBW:: PROC
    IiEmitOpcode 0x30
    JMP IiyVPMOVSXBW.HVM:
  ENDP IiyVPMOVZXBW::
↑ VPMOVZXBD
Packed Move with Zero Extend
Intel reference
VPMOVZXBD xmm1, xmm2/m32 VEX.128.66.0F38.WIG 31 /r
VPMOVZXBD ymm1, xmm2/m64 VEX.256.66.0F38.WIG 31 /r
VPMOVZXBD xmm1 {k1}{z}, xmm2/m32 EVEX.128.66.0F38.WIG 31 /r
VPMOVZXBD ymm1 {k1}{z}, xmm2/m64 EVEX.256.66.0F38.WIG 31 /r
VPMOVZXBD zmm1 {k1}{z}, xmm2/m128 EVEX.512.66.0F38.WIG 31 /r
Category
sse41,simdint,conver
Operands
Vdq,Md | Vdq,Udq
Opcode
0x660F3831 /r | 0x660F3831 /r
CPU
C2++
Documented
D43
Tested by
t5678
IiyVPMOVZXBD:: PROC
    IiEmitOpcode 0x31
    JMP IiyVPMOVSXBD.QVM:
  ENDP IiyVPMOVZXBD::
↑ VPMOVZXBQ
Packed Move with Zero Extend
Intel reference
VPMOVZXBQ xmm1, xmm2/m16 VEX.128.66.0F38.WIG 32 /r
VPMOVZXBQ ymm1, xmm2/m32 VEX.256.66.0F38.WIG 32 /r
VPMOVZXBQ xmm1 {k1}{z}, xmm2/m16 EVEX.128.66.0F38.WIG 32 /r
VPMOVZXBQ ymm1 {k1}{z}, xmm2/m32 EVEX.256.66.0F38.WIG 32 /r
VPMOVZXBQ zmm1 {k1}{z}, xmm2/m64 EVEX.512.66.0F38.WIG 32 /r
Category
sse41,simdint,conver
Operands
Vdq,Mw | Vdq,Udq
Opcode
0x660F3832 /r | 0x660F3832 /r
CPU
C2++
Documented
D43
Tested by
t5678
IiyVPMOVZXBQ:: PROC
    IiEmitOpcode 0x32
    JMP IiyVPMOVSXBQ.OVM:
  ENDP IiyVPMOVZXBQ::
↑ VPMOVZXWD
Packed Move with Zero Extend
Intel reference
VPMOVZXWD xmm1, xmm2/m64 VEX.128.66.0F38.WIG 33 /r
VPMOVZXWD ymm1, xmm2/m128 VEX.256.66.0F38.WIG 33 /r
VPMOVZXWD xmm1 {k1}{z}, xmm2/m64 EVEX.128.66.0F38.WIG 33 /r
VPMOVZXWD ymm1 {k1}{z}, xmm2/m128 EVEX.256.66.0F38.WIG 33 /r
VPMOVZXWD zmm1 {k1}{z}, ymm2/m256 EVEX.512.66.0F38.WIG 33 /r
Category
sse41,simdint,conver
Operands
Vdq,Mq | Vdq,Udq
Opcode
0x660F3833 /r | 0x660F3833 /r
CPU
C2++
Documented
D43
Tested by
t5678
IiyVPMOVZXWD:: PROC
    IiEmitOpcode 0x33
    JMP IiyVPMOVSXBW.HVM:
  ENDP IiyVPMOVZXWD::
↑ VPMOVZXWQ
Packed Move with Zero Extend
Intel reference
VPMOVZXWQ xmm1, xmm2/m32 VEX.128.66.0F38.WIG 34 /r
VPMOVZXWQ ymm1, xmm2/m64 VEX.256.66.0F38.WIG 34 /r
VPMOVZXWQ xmm1 {k1}{z}, xmm2/m32 EVEX.128.66.0F38.WIG 34 /r
VPMOVZXWQ ymm1 {k1}{z}, xmm2/m64 EVEX.256.66.0F38.WIG 34 /r
VPMOVZXWQ zmm1 {k1}{z}, xmm2/m128 EVEX.512.66.0F38.WIG 34 /r
Category
sse41,simdint,conver
Operands
Vdq,Md | Vdq,Udq
Opcode
0x660F3834 /r | 0x660F3834 /r
CPU
C2++
Documented
D43
Tested by
t5678
IiyVPMOVZXWQ:: PROC
    IiEmitOpcode 0x34
    JMP IiyVPMOVSXBD.QVM:
  ENDP IiyVPMOVZXWQ::
↑ VPMOVZXDQ
Packed Move with Zero Extend
Intel reference
VPMOVZXDQ xmm1, xmm2/m64 VEX.128.66.0F38.WIG 35 /r
VPMOVZXDQ ymm1, xmm2/m128 VEX.256.66.0F38.WIG 35 /r
VPMOVZXDQ xmm1 {k1}{z}, xmm2/m64 EVEX.128.66.0F38.W0 35 /r
VPMOVZXDQ ymm1 {k1}{z}, xmm2/m128 EVEX.256.66.0F38.W0 35 /r
VPMOVZXDQ zmm1 {k1}{z}, ymm2/m256 EVEX.512.66.0F38.W0 35 /r
Category
sse41,simdint,conver
Operands
Vdq,Mq | Vdq,Udq
Opcode
0x660F3835 /r | 0x660F3835 /r
CPU
C2++
Documented
D43
Tested by
t5678
IiyVPMOVZXDQ:: PROC
    IiEmitOpcode 0x35
    JMP IiyVPMOVSXBW.HVM:
  ENDP IiyVPMOVZXDQ::
↑ VPMULDQ
Multiply Packed Signed Dword Integers
Intel reference
VPMULDQ xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38.WIG 28 /r
VPMULDQ ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38.WIG 28 /r
VPMULDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F38.W1 28 /r
VPMULDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F38.W1 28 /r
VPMULDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F38.W1 28 /r
Category
sse41,simdint,arith
Operands
Vdq,Wdq
Opcode
0x660F3828 /r
CPU
C2++
Documented
D43
Tested by
t5680
IiyVPMULDQ:: PROC
    IiAllowModifier MASK
    IiAllowBroadcasting QWORD
    IiEmitOpcode 0x28
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F38.WIG, EVEX.NDS.128.66.0F38.W1
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F38.WIG, EVEX.NDS.256.66.0F38.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W1
    RET
  ENDP IiyVPMULDQ::
↑ VPMULHRSW
Packed Multiply High with Round and Scale
Intel reference
VPMULHRSW xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38 0B /r
VPMULHRSW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38 0B /r
VPMULHRSW xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F38.WIG 0B /r
VPMULHRSW ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F38.WIG 0B /r
VPMULHRSW zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F38.WIG 0B /r
Category
ssse3,simdint
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F380B /r | 0x660F380B /r
CPU
C2+
IiyVPMULHRSW:: PROC
    IiAllowModifier MASK
    IiEmitOpcode 0x0B
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F38, EVEX.NDS.128.66.0F38.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F38, EVEX.NDS.256.66.0F38.WIG
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.WIG
    RET
  ENDP IiyVPMULHRSW::
↑ VPMULHUW
Multiply Packed Unsigned Integers and Store High Result
Intel reference
VPMULHUW xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F E4 /r
VPMULHUW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F E4 /r
VPMULHUW xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F.WIG E4 /r
VPMULHUW ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F.WIG E4 /r
VPMULHUW zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F.WIG E4 /r
Category
sse1,simdint
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0FE4 /r | 0x660FE4 /r
CPU
P3+
IiyVPMULHUW:: PROC
    IiAllowModifier MASK
    IiEmitOpcode 0xE4
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F, EVEX.NDS.128.66.0F.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F, EVEX.NDS.256.66.0F.WIG
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.WIG
    RET
  ENDP IiyVPMULHUW::
↑ VPMULHW
Multiply Packed Signed Integers and Store High Result
Intel reference
VPMULHW xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F E5 /r
VPMULHW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F E5 /r
VPMULHW xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F.WIG E5 /r
VPMULHW ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F.WIG E5 /r
VPMULHW zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F.WIG E5 /r
Category
mmx,arith
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0FE5 /r | 0x660FE5 /r
CPU
PX+
IiyVPMULHW:: PROC
    IiAllowModifier MASK
    IiEmitOpcode 0xE5
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F, EVEX.NDS.128.66.0F.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F, EVEX.NDS.256.66.0F.WIG
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.WIG
    RET
  ENDP IiyVPMULHW::
↑ VPMULLD
Multiply Packed Signed Dword Integers and Store Low Result
Intel reference
VPMULLD xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38.WIG 40 /r
VPMULLD ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38.WIG 40 /r
VPMULLD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F38.W0 40 /r
VPMULLD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F38.W0 40 /r
VPMULLD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F38.W0 40 /r
VPMULLD zmm1 {k1}, zmm2, zmm3/m512/m32bcst MVEX.NDS.512.66.0F38.W0 40 /r
Category
sse41,simdint,arith
Operands
Vdq,Wdq
Opcode
0x660F3840 /r
CPU
C2++
Documented
D43
Tested by
t5680
IiyVPMULLD:: PROC
    IiAllowModifier MASK
    IiAllowBroadcasting DWORD
    IiEmitOpcode 0x40
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDisp8MVEX Si32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F38.WIG, EVEX.NDS.128.66.0F38.W0
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F38.WIG, EVEX.NDS.256.66.0F38.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W0, MVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVPMULLD::
↑ VPMULLQ
Multiply Packed QWORD Signed Integers and Store Low Result
Intel reference
VPMULLQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F38.W1 40 /r
VPMULLQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F38.W1 40 /r
VPMULLQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F38.W1 40 /r
Opcode
0x40
Tested by
t5680
IiyVPMULLQ:: PROC
    IiAllowModifier MASK
    IiAllowBroadcasting QWORD
    IiEmitOpcode 0x40
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F38.W1
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W1
    RET
  ENDP IiyVPMULLQ::
↑ VPMULLW
Multiply Packed Signed Integers and Store Low Result
Intel reference
VPMULLW xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F D5 /r
VPMULLW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F D5 /r
VPMULLW xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F.WIG D5 /r
VPMULLW ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F.WIG D5 /r
VPMULLW zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F.WIG D5 /r
Category
mmx,arith
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0FD5 /r | 0x660FD5 /r
CPU
PX+
IiyVPMULLW:: PROC
    IiAllowModifier MASK
    IiEmitOpcode 0xD5
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F, EVEX.NDS.128.66.0F.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F, EVEX.NDS.256.66.0F.WIG
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.WIG
    RET
  ENDP IiyVPMULLW::
↑ VPAVGB
Average Packed unsigned BYTE Integers
Intel reference
VPAVGB xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F E0 /r
VPAVGB ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F E0 /r
VPAVGB xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F.WIG E0 /r
VPAVGB ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F.WIG E0 /r
VPAVGB zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F.WIG E0 /r
Category
sse1,simdint
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0FE0 /r | 0x660FE0 /r
CPU
P3+
Tested by
t5684
IiyVPAVGB:: PROC
    IiEncoding DATA=BYTE
    IiAllowModifier MASK
    IiEmitOpcode 0xE0
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F, EVEX.NDS.128.66.0F.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F, EVEX.NDS.256.66.0F.WIG
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.WIG
    RET
  ENDP IiyVPAVGB::
↑ VPAVGW
Average Packed unsigned WORD Integers
Intel reference
VPAVGW xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F E3 /r
VPAVGW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F E3 /r
VPAVGW xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F.WIG E3 /r
VPAVGW ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F.WIG E3 /r
VPAVGW zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F.WIG E3 /r
Category
sse1,simdint
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0FE3 /r | 0x660FE3 /r
CPU
P3+
Tested by
t5684
IiyVPAVGW:: PROC
    IiEncoding DATA=WORD
    IiAllowModifier MASK
    IiEmitOpcode 0xE3
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F, EVEX.NDS.128.66.0F.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F, EVEX.NDS.256.66.0F.WIG
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.WIG
    RET
  ENDP IiyVPAVGW::
↑ VPMASKMOVD
Conditional SIMD Integer Packed Loads and Stores
Intel reference
VPMASKMOVD xmm1, xmm2, m128 VEX.NDS.128.66.0F38.W0 8C /r
VPMASKMOVD ymm1, ymm2, m256 VEX.NDS.256.66.0F38.W0 8C /r
VPMASKMOVD m128, xmm1, xmm2 VEX.NDS.128.66.0F38.W0 8E /r
VPMASKMOVD m256, ymm1, ymm2 VEX.NDS.256.66.0F38.W0 8E /r
Opcode
0x8C | 0x8E
Tested by
t5468 t5668
IiyVPMASKMOVD:: PROC
    IiModRM /r
    CMP DL,mem
    JE .M:
    IiEmitOpcode 0x8E
    IiOpEn MVR
    IiDispatchFormat mem.xmm.xmm, mem.ymm.ymm
.M: IiEmitOpcode 0x8C ; The last operand is in memory.
    IiOpEn RVM
    IiDispatchFormat  xmm.xmm.mem, ymm.ymm.mem
.xmm.xmm.mem:
.mem.xmm.xmm:
    IiEmitPrefix VEX.NDS.128.66.0F38.W0
    RET
.ymm.ymm.mem:
.mem.ymm.ymm:
    IiEmitPrefix VEX.NDS.256.66.0F38.W0
    RET
  ENDP IiyVPMASKMOVD::
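The handler emits opcode 0x8C when the last operand is memory (conditional load) and 0x8E when the first operand is memory (conditional store). A sketch with placeholder operands:
    VPMASKMOVD XMM1, XMM2, [Src]    ; Masked load, opcode 0x8C.
    VPMASKMOVD [Dst], YMM1, YMM2    ; Masked store, opcode 0x8E.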
↑ VPMASKMOVQ
Conditional SIMD Integer Packed Loads and Stores
Intel reference
VPMASKMOVQ xmm1, xmm2, m128 VEX.NDS.128.66.0F38.W1 8C /r
VPMASKMOVQ ymm1, ymm2, m256 VEX.NDS.256.66.0F38.W1 8C /r
VPMASKMOVQ m128, xmm1, xmm2 VEX.NDS.128.66.0F38.W1 8E /r
VPMASKMOVQ m256, ymm1, ymm2 VEX.NDS.256.66.0F38.W1 8E /r
Opcode
0x8C | 0x8E
Tested by
t5468 t5668
IiyVPMASKMOVQ:: PROC
    IiModRM /r
    CMP DL,mem
    JE .M:
    IiEmitOpcode 0x8E
    IiOpEn MVR
    IiDispatchFormat mem.xmm.xmm, mem.ymm.ymm
.M: IiEmitOpcode 0x8C ; The last operand is in memory.
    IiOpEn RVM
    IiDispatchFormat  xmm.xmm.mem, ymm.ymm.mem
.xmm.xmm.mem:
.mem.xmm.xmm:
    IiEmitPrefix VEX.NDS.128.66.0F38.W1
    RET
.ymm.ymm.mem:
.mem.ymm.ymm:
    IiEmitPrefix VEX.NDS.256.66.0F38.W1
    RET
  ENDP IiyVPMASKMOVQ::
↑ VMASKMOVPS
Conditional SIMD Packed Loads and Stores
Intel reference
VMASKMOVPS xmm1, xmm2, m128 VEX.NDS.128.66.0F38.W0 2C /r
VMASKMOVPS ymm1, ymm2, m256 VEX.NDS.256.66.0F38.W0 2C /r
VMASKMOVPS m128, xmm1, xmm2 VEX.NDS.128.66.0F38.W0 2E /r
VMASKMOVPS m256, ymm1, ymm2 VEX.NDS.256.66.0F38.W0 2E /r
Opcode
0x2C | 0x2E
Tested by
t5668
IiyVMASKMOVPS:: PROC
    IiModRM /r
    CMP DL,mem
    JE .M:
    IiEmitOpcode 0x2E
    IiOpEn MVR
    IiDispatchFormat  mem.xmm.xmm, mem.ymm.ymm
.M: IiEmitOpcode 0x2C
    IiOpEn RVM
    IiDispatchFormat  xmm.xmm.mem, ymm.ymm.mem
.mem.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F38.W0
    RET
.mem.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F38.W0
    RET
  ENDP IiyVMASKMOVPS::
↑ VMASKMOVPD
Conditional SIMD Packed Loads and Stores
Intel reference
VMASKMOVPD xmm1, xmm2, m128 VEX.NDS.128.66.0F38.W0 2D /r
VMASKMOVPD ymm1, ymm2, m256 VEX.NDS.256.66.0F38.W0 2D /r
VMASKMOVPD m128, xmm1, xmm2 VEX.NDS.128.66.0F38.W0 2F /r
VMASKMOVPD m256, ymm1, ymm2 VEX.NDS.256.66.0F38.W0 2F /r
Opcode
0x2D | 0x2F
Tested by
t5668
IiyVMASKMOVPD:: PROC
    IiModRM /r
    CMP DL,mem
    JE .M:
    IiEmitOpcode 0x2F
    IiOpEn MVR
    IiDispatchFormat  mem.xmm.xmm, mem.ymm.ymm
.M: IiEmitOpcode 0x2D
    IiOpEn RVM
    IiDispatchFormat  xmm.xmm.mem, ymm.ymm.mem
.mem.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F38.W0
    RET
.mem.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F38.W0
    RET
  ENDP IiyVMASKMOVPD::
↑ VMASKMOVDQU
Store Selected Bytes of Double Quadword
Intel reference
VMASKMOVDQU xmm1, xmm2 VEX.128.66.0F.WIG F7 /r
Category
sse2,cachect
Operands
BDdq,Vdq,Udq
Opcode
0x660FF7 /r
CPU
P4+
Tested by
t5668
IiyVMASKMOVDQU:: PROC
    IiEmitOpcode 0xF7
    IiOpEn RM
    IiModRM /r
    IiDispatchFormat  xmm.xmm
.xmm.xmm:
    IiEmitPrefix VEX.128.66.0F.WIG
    RET
  ENDP IiyVMASKMOVDQU::
↑ VPMOVMSKB
Move Byte Mask
Intel reference
VPMOVMSKB reg, xmm1 VEX.128.66.0F.WIG D7 /r
VPMOVMSKB reg, ymm1 VEX.256.66.0F.WIG D7 /r
Category
sse1,simdint
Operands
Gdqp,Nq | Gdqp,Udq
Opcode
0x0FD7 /r | 0x660FD7 /r
CPU
P3+
Tested by
t5668
IiyVPMOVMSKB:: PROC
    IiEmitOpcode 0xD7
    IiOpEn RM
    IiModRM /r
    IiDispatchFormat  r32.xmm, r32.ymm, r64.xmm, r64.ymm
.r32.xmm:
.r64.xmm:
    IiEmitPrefix VEX.128.66.0F.WIG
    RET
.r32.ymm:
.r64.ymm:
    IiEmitPrefix VEX.256.66.0F.WIG
    RET
  ENDP IiyVPMOVMSKB::
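A sketch of usage; the destination is a general-purpose register (the r64 forms are also dispatched in 64bit code):
    VPMOVMSKB EAX, XMM1             ; Sign bits of the 16 bytes of XMM1 to EAX.
    VPMOVMSKB EDX, YMM2             ; Sign bits of the 32 bytes of YMM2 to EDX.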
↑ VPBLENDW
Blend Packed Words
Intel reference
VPBLENDW xmm1, xmm2, xmm3/m128, imm8 VEX.NDS.128.66.0F3A.WIG 0E /r ib
VPBLENDW ymm1, ymm2, ymm3/m256, imm8 VEX.NDS.256.66.0F3A.WIG 0E /r ib
Category
sse41,simdint,datamov
Operands
Vdq,Wdq,Ib
Opcode
0x660F3A0E /r
CPU
C2++
Documented
D43
Tested by
t5256
IiyVPBLENDW:: PROC
    IiEmitOpcode 0x0E
.op:IiOpEn RVM
    IiModRM /r
    IiEmitImm Operand4, BYTE
    IiDispatchFormat  xmm.xmm.xmm.imm, xmm.xmm.mem.imm, ymm.ymm.ymm.imm, ymm.ymm.mem.imm
.xmm.xmm.xmm.imm:
.xmm.xmm.mem.imm:
    IiEmitPrefix VEX.NDS.128.66.0F3A.WIG
    RET
.ymm.ymm.ymm.imm:
.ymm.ymm.mem.imm:
    IiEmitPrefix VEX.NDS.256.66.0F3A.WIG
    RET
  ENDP IiyVPBLENDW::
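A sketch of usage with placeholder operands; the fourth operand is an 8bit immediate selector:
    VPBLENDW XMM1, XMM2, XMM3, 0xAA ; Words of XMM3 are taken where the immediate bit is set.
    VPBLENDW YMM1, YMM2, [Src], 0x0F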
↑ VPBLENDD
Blend Packed Dwords
Description
VPBLENDD
Intel reference
VPBLENDD xmm1, xmm2, xmm3/m128, imm8 VEX.NDS.128.66.0F3A.W0 02 /r ib
VPBLENDD ymm1, ymm2, ymm3/m256, imm8 VEX.NDS.256.66.0F3A.W0 02 /r ib
Opcode
0x02
Tested by
t5256
IiyVPBLENDD:: PROC
    IiEmitOpcode 0x02
    JMP IiyVPBLENDW.op:
  ENDP IiyVPBLENDD::
↑ VBLENDPS
Blend Packed Single-FP Values
Intel reference
VBLENDPS xmm1, xmm2, xmm3/m128, imm8 VEX.NDS.128.66.0F3A.WIG 0C /r ib
VBLENDPS ymm1, ymm2, ymm3/m256, imm8 VEX.NDS.256.66.0F3A.WIG 0C /r ib
Category
sse41,simdfp,datamov
Operands
Vps,Wps,Ib
Opcode
0x660F3A0C /r
CPU
C2++
Documented
D43
Tested by
t5256
IiyVBLENDPS:: PROC
    IiEmitOpcode 0x0C
    JMP IiyVPBLENDW.op:
  ENDP IiyVBLENDPS::
↑ VBLENDPD
Blend Packed Double-FP Values
Intel reference
VBLENDPD xmm1, xmm2, xmm3/m128, imm8 VEX.NDS.128.66.0F3A.WIG 0D /r ib
VBLENDPD ymm1, ymm2, ymm3/m256, imm8 VEX.NDS.256.66.0F3A.WIG 0D /r ib
Category
sse41,simdfp,datamov
Operands
Vpd,Wpd,Ib
Opcode
0x660F3A0D /r
CPU
C2++
Documented
D43
Tested by
t5256
IiyVBLENDPD:: PROC
    IiEmitOpcode 0x0D
    JMP IiyVPBLENDW.op:
  ENDP IiyVBLENDPD::
↑ VBLENDVPS
Variable Blend Packed Single-FP Values
Intel reference
VBLENDVPS xmm1, xmm2, xmm3/m128, xmm4 VEX.NDS.128.66.0F3A.W0 4A /r /is4
VBLENDVPS ymm1, ymm2, ymm3/m256, ymm4 VEX.NDS.256.66.0F3A.W0 4A /r /is4
Category
sse41,simdint,datamov
Operands
Vps,Wps,XMM0
Opcode
0x660F3814 /r
CPU
C2++
Documented
D43
Tested by
t5258
IiyVBLENDVPS:: PROC
    IiEmitOpcode 0x4A
.op:IiOpEn RVM
    IiModRM /r
    IiIs4 Operand4
    IiDispatchFormat  xmm.xmm.xmm.xmm, xmm.xmm.mem.xmm, ymm.ymm.ymm.ymm, ymm.ymm.mem.ymm
.xmm.xmm.xmm.xmm:
.xmm.xmm.mem.xmm:
    IiEmitPrefix VEX.NDS.128.66.0F3A.W0
    RET
.ymm.ymm.ymm.ymm:
.ymm.ymm.mem.ymm:
    IiEmitPrefix VEX.NDS.256.66.0F3A.W0
    RET
  ENDP IiyVBLENDVPS::
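A sketch of usage with placeholder operands; the fourth operand is a register encoded in an immediate byte (/is4) and supplies the per-element select mask:
    VBLENDVPS XMM1, XMM2, [Src], XMM4
    VBLENDVPS YMM1, YMM2, YMM3, YMM4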
↑ VBLENDVPD
Variable Blend Packed Double-FP Values
Intel reference
VBLENDVPD xmm1, xmm2, xmm3/m128, xmm4 VEX.NDS.128.66.0F3A.W0 4B /r /is4
VBLENDVPD ymm1, ymm2, ymm3/m256, ymm4 VEX.NDS.256.66.0F3A.W0 4B /r /is4
Category
sse41,simdint,datamov
Operands
Vpd,Wpd,XMM0
Opcode
0x660F3815 /r
CPU
C2++
Documented
D43
Tested by
t5258
IiyVBLENDVPD:: PROC
    IiEmitOpcode 0x4B
    JMP IiyVBLENDVPS.op:
  ENDP IiyVBLENDVPD::
↑ VPBLENDVB
Variable Blend Packed Bytes
Intel reference
VPBLENDVB xmm1, xmm2, xmm3/m128, xmm4 VEX.NDS.128.66.0F3A.W0 4C /r /is4
VPBLENDVB ymm1, ymm2, ymm3/m256, ymm4 VEX.NDS.256.66.0F3A.W0 4C /r /is4
Category
sse41,simdint,datamov
Operands
Vdq,Wdq,XMM0
Opcode
0x660F3810 /r
CPU
C2++
Documented
D43
Tested by
t5258
IiyVPBLENDVB:: PROC
    IiEmitOpcode 0x4C
    JMP IiyVBLENDVPS.op:
  ENDP IiyVPBLENDVB::
↑ VLDDQU
Load Unaligned Integer 128 Bits
Intel reference
VLDDQU xmm1, m128 VEX.128.F2.0F.WIG F0 /r
VLDDQU ymm1, m256 VEX.256.F2.0F.WIG F0 /r
Category
sse3,cachect
Operands
Vdq,Mdq
Opcode
0xF20FF0 /r
CPU
P4++
See also
VLDQQU.
Tested by
t5470
IiyVLDDQU:: PROC
    IiEmitOpcode 0xF0
    IiOpEn RM
    IiModRM /r
    IiDispatchFormat  xmm.mem, ymm.mem
.xmm.mem:
    IiEmitPrefix VEX.128.F2.0F.WIG
    RET
.ymm.mem:
    IiEmitPrefix VEX.256.F2.0F.WIG
    RET
  ENDP IiyVLDDQU::
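A short sketch (UnalignedBuf is a hypothetical label); the source must be a memory operand, which is why only the xmm.mem and ymm.mem formats are dispatched:
    VLDDQU XMM0, [UnalignedBuf]   ; 128-bit unaligned load
    VLDDQU YMM1, [UnalignedBuf]   ; 256-bit unaligned load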
↑ VLDQQU
Load Unaligned Integer 256 Bits
Intel reference
VLDQQU ymm1, m256 VEX.256.F2.0F.WIG F0 /r
Opcode
0xF0
Documented
UNDOC, see NASM
See also
VLDDQU.
Tested by
t5470
IiyVLDQQU:: PROC
    IiRequire UNDOC
    IiDispatchFormat  ymm.mem
.ymm.mem: JMP IiyVLDDQU:
  ENDP IiyVLDQQU::
↑ VLDMXCSR
Load MXCSR Register
Intel reference
VLDMXCSR m32 VEX.LZ.0F.WIG AE /2
Category
sse1,mxcsrsm
Operands
Md
Opcode
0x0FAE /2
CPU
P3+
Tested by
t3700
IiyVLDMXCSR:: PROC
    IiEmitOpcode 0xAE
    IiOpEn M
    IiModRM /2
    IiDispatchFormat  mem 
.mem:
    IiEmitPrefix VEX.LZ.0F.WIG
    RET
  ENDP IiyVLDMXCSR::
↑ VSTMXCSR
Store MXCSR Register State
Intel reference
VSTMXCSR m32 VEX.LZ.0F.WIG AE /3
Category
sse1,mxcsrsm
Operands
Md
Opcode
0x0FAE /3
CPU
P3+
Tested by
t3700
IiyVSTMXCSR:: PROC
    IiEmitOpcode 0xAE
    IiOpEn M
    IiModRM /3
    IiDispatchFormat  mem
.mem:
    IiEmitPrefix VEX.LZ.0F.WIG
    RET
  ENDP IiyVSTMXCSR::
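Both handlers share opcode 0xAE and differ only in the ModRM reg field (/2 for the load, /3 for the store). A usage sketch, assuming a doubleword variable MxcsrSave defined elsewhere:
    VSTMXCSR [MxcsrSave]   ; save the current MXCSR contents
    VLDMXCSR [MxcsrSave]   ; restore them later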
↑ VRSQRTSS
Compute Recipr. of Square Root of Scalar Single-FP Value
Intel reference
VRSQRTSS xmm1, xmm2, xmm3/m32 VEX.NDS.LIG.F3.0F.WIG 52 /r
Category
sse1,simdfp,arith
Operands
Vss,Wss
Opcode
0xF30F52 /r
CPU
P3+
Tested by
t5655
IiyVRSQRTSS:: PROC
    IiEmitOpcode 0x52
.op:IiOpEn RVM
    IiModRM /r
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.LIG.F3.0F.WIG
    RET
  ENDP IiyVRSQRTSS::
↑ VRCPSS
Compute Reciprocal of Scalar Single-FP Values
Intel reference
VRCPSS xmm1, xmm2, xmm3/m32 VEX.NDS.LIG.F3.0F.WIG 53 /r
Category
sse1,simdfp,arith
Operands
Vss,Wss
Opcode
0xF30F53 /r
CPU
P3+
Tested by
t5655
IiyVRCPSS:: PROC
    IiEmitOpcode 0x53
    JMP IiyVRSQRTSS.op:
  ENDP IiyVRCPSS::
↑ VRSQRTPS
Compute Recipr. of Square Roots of Packed Single-FP Values
Intel reference
VRSQRTPS xmm1, xmm2/m128 VEX.128.0F.WIG 52 /r
VRSQRTPS ymm1, ymm2/m256 VEX.256.0F.WIG 52 /r
Category
sse1,simdfp,arith
Operands
Vps,Wps
Opcode
0x0F52 /r
CPU
P3+
Tested by
t5655
IiyVRSQRTPS:: PROC
    IiEmitOpcode 0x52
.op:IiOpEn RM
    IiModRM /r
    IiDispatchFormat  xmm.xmm, xmm.mem, ymm.ymm, ymm.mem
.xmm.xmm:
.xmm.mem:
    IiEmitPrefix VEX.128.0F.WIG
    RET
.ymm.ymm:
.ymm.mem:
    IiEmitPrefix VEX.256.0F.WIG
    RET
  ENDP IiyVRSQRTPS::
↑ VRCPPS
Compute Reciprocals of Packed Single-FP Values
Intel reference
VRCPPS xmm1, xmm2/m128 VEX.128.0F.WIG 53 /r
VRCPPS ymm1, ymm2/m256 VEX.256.0F.WIG 53 /r
Category
sse1,simdfp,arith
Operands
Vps,Wps
Opcode
0x0F53 /r
CPU
P3+
Tested by
t5655
IiyVRCPPS:: PROC
    IiEmitOpcode 0x53
    JMP IiyVRSQRTPS.op:
  ENDP IiyVRCPPS::
↑ VUNPCKLPS
Unpack and Interleave Low Packed Single-FP Values
Intel reference
VUNPCKLPS xmm1,xmm2, xmm3/m128 VEX.NDS.128.0F.WIG 14 /r
VUNPCKLPS ymm1,ymm2,ymm3/m256 VEX.NDS.256.0F.WIG 14 /r
VUNPCKLPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.0F.W0 14 /r
VUNPCKLPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.0F.W0 14 /r
VUNPCKLPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.0F.W0 14 /r
Category
sse1,simdfp,shunpck
Operands
Vps,Wq
Opcode
0x0F14 /r
CPU
P3+
IiyVUNPCKLPS:: PROC
    IiEmitOpcode 0x14
.op:IiAllowModifier MASK
    IiAllowBroadcasting DWORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.0F.WIG, EVEX.NDS.128.0F.W0
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.0F.WIG, EVEX.NDS.256.0F.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.0F.W0
    RET
  ENDP IiyVUNPCKLPS::
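A short sketch of the VEX and EVEX formats dispatched above (mask decorators follow the Intel-style notation of the reference lines; Src512 is a hypothetical label):
    VUNPCKLPS XMM1, XMM2, XMM3              ; VEX.NDS.128: interleave the low singles of XMM2 and XMM3
    VUNPCKLPS ZMM1 {K1}{Z}, ZMM2, [Src512]  ; EVEX.512 with zeroing-masking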
↑ VUNPCKHPS
Unpack and Interleave High Packed Single-FP Values
Intel reference
VUNPCKHPS xmm1, xmm2, xmm3/m128 VEX.NDS.128.0F.WIG 15 /r
VUNPCKHPS ymm1, ymm2, ymm3/m256 VEX.NDS.256.0F.WIG 15 /r
VUNPCKHPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.0F.W0 15 /r
VUNPCKHPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.0F.W0 15 /r
VUNPCKHPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.0F.W0 15 /r
Category
sse1,simdfp,shunpck
Operands
Vps,Wq
Opcode
0x0F15 /r
CPU
P3+
IiyVUNPCKHPS:: PROC
    IiEmitOpcode 0x15
    JMP IiyVUNPCKLPS.op:
  ENDP IiyVUNPCKHPS::
↑ VUNPCKLPD
Unpack and Interleave Low Packed Double-FP Values
Intel reference
VUNPCKLPD xmm1,xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 14 /r
VUNPCKLPD ymm1,ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 14 /r
VUNPCKLPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F.W1 14 /r
VUNPCKLPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F.W1 14 /r
VUNPCKLPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F.W1 14 /r
Category
sse2,pcksclr,shunpck
Operands
Vpd,Wpd
Opcode
0x660F14 /r
CPU
P4+
IiyVUNPCKLPD:: PROC
    IiEmitOpcode 0x14
.op:IiAllowModifier MASK
    IiAllowBroadcasting QWORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F.WIG, EVEX.NDS.128.66.0F.W1
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F.WIG, EVEX.NDS.256.66.0F.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.W1
    RET
  ENDP IiyVUNPCKLPD::
↑ VUNPCKHPD
Unpack and Interleave High Packed Double-FP Values
Intel reference
VUNPCKHPD xmm1,xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 15 /r
VUNPCKHPD ymm1,ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 15 /r
VUNPCKHPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F.W1 15 /r
VUNPCKHPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F.W1 15 /r
VUNPCKHPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F.W1 15 /r
Category
sse2,pcksclr,shunpck
Operands
Vpd,Wpd
Opcode
0x660F15 /r
CPU
P4+
IiyVUNPCKHPD:: PROC
    IiEmitOpcode 0x15
    JMP IiyVUNPCKLPD.op:
  ENDP IiyVUNPCKHPD::
↑ VPUNPCKLBW
Unpack Low Data
Intel reference
VPUNPCKLBW xmm1,xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 60 /r
VPUNPCKLBW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 60 /r
VPUNPCKLBW xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F.WIG 60 /r
VPUNPCKLBW ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F.WIG 60 /r
VPUNPCKLBW zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F.WIG 60 /r
Category
mmx,unpack
Operands
Pq,Qd | Vdq,Wdq
Opcode
0x0F60 /r | 0x660F60 /r
CPU
PX+
Tested by
t5692
IiyVPUNPCKLBW:: PROC
    IiEmitOpcode 0x60
.op:IiAllowModifier MASK
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F.WIG, EVEX.NDS.128.66.0F.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F.WIG, EVEX.NDS.256.66.0F.WIG
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.WIG
    RET
  ENDP IiyVPUNPCKLBW::
↑ VPUNPCKLWD
Unpack Low Data
Intel reference
VPUNPCKLWD xmm1,xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 61 /r
VPUNPCKLWD ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 61 /r
VPUNPCKLWD xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F.WIG 61 /r
VPUNPCKLWD ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F.WIG 61 /r
VPUNPCKLWD zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F.WIG 61 /r
Category
mmx,unpack
Operands
Pq,Qd | Vdq,Wdq
Opcode
0x0F61 /r | 0x660F61 /r
CPU
PX+
Tested by
t5692
IiyVPUNPCKLWD:: PROC
    IiEmitOpcode 0x61
    JMP IiyVPUNPCKLBW.op:
  ENDP IiyVPUNPCKLWD::
↑ VPUNPCKLDQ
Unpack Low Data
Intel reference
VPUNPCKLDQ xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 62 /r
VPUNPCKLDQ ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 62 /r
VPUNPCKLDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F.W0 62 /r
VPUNPCKLDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F.W0 62 /r
VPUNPCKLDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F.W0 62 /r
Category
mmx,unpack
Operands
Pq,Qd | Vdq,Wdq
Opcode
0x0F62 /r | 0x660F62 /r
CPU
PX+
Tested by
t5692
IiyVPUNPCKLDQ:: PROC
    IiEmitOpcode 0x62
.op:IiAllowModifier MASK
    IiAllowBroadcasting DWORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F.WIG, EVEX.NDS.128.66.0F.W0
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F.WIG, EVEX.NDS.256.66.0F.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.W0
    RET
  ENDP IiyVPUNPCKLDQ::
↑ VPUNPCKLQDQ
Unpack Low Data
Intel reference
VPUNPCKLQDQ xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 6C /r
VPUNPCKLQDQ ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 6C /r
VPUNPCKLQDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F.W1 6C /r
VPUNPCKLQDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F.W1 6C /r
VPUNPCKLQDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F.W1 6C /r
Category
sse2,simdint,shunpck
Operands
Vdq,Wdq
Opcode
0x660F6C /r
CPU
P4+
Tested by
t5692
IiyVPUNPCKLQDQ:: PROC
    IiEmitOpcode 0x6C
    JMP IiyVUNPCKLPD.op:
  ENDP IiyVPUNPCKLQDQ::
↑ VPUNPCKHBW
Unpack High Data
Intel reference
VPUNPCKHBW xmm1,xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 68 /r
VPUNPCKHBW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 68 /r
VPUNPCKHBW xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F.WIG 68 /r
VPUNPCKHBW ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F.WIG 68 /r
VPUNPCKHBW zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F.WIG 68 /r
Category
mmx,unpack
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F68 /r | 0x660F68 /r
CPU
PX+
Tested by
t5694
IiyVPUNPCKHBW:: PROC
    IiEmitOpcode 0x68
    JMP IiyVPUNPCKLBW.op:
  ENDP IiyVPUNPCKHBW::
↑ VPUNPCKHWD
Unpack High Data
Intel reference
VPUNPCKHWD xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 69 /r
VPUNPCKHWD ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 69 /r
VPUNPCKHWD xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F.WIG 69 /r
VPUNPCKHWD ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F.WIG 69 /r
VPUNPCKHWD zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F.WIG 69 /r
Category
mmx,unpack
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F69 /r | 0x660F69 /r
CPU
PX+
Tested by
t5694
IiyVPUNPCKHWD:: PROC
    IiEmitOpcode 0x69
    JMP IiyVPUNPCKLBW.op:
  ENDP IiyVPUNPCKHWD::
↑ VPUNPCKHDQ
Unpack High Data
Intel reference
VPUNPCKHDQ xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 6A /r
VPUNPCKHDQ ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 6A /r
VPUNPCKHDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F.W0 6A /r
VPUNPCKHDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F.W0 6A /r
VPUNPCKHDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F.W0 6A /r
Category
mmx,unpack
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F6A /r | 0x660F6A /r
CPU
PX+
Tested by
t5694
IiyVPUNPCKHDQ:: PROC
    IiEmitOpcode 0x6A
    JMP IiyVPUNPCKLDQ.op:
  ENDP IiyVPUNPCKHDQ::
↑ VPUNPCKHQDQ
Unpack High Data
Intel reference
VPUNPCKHQDQ xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 6D /r
VPUNPCKHQDQ ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 6D /r
VPUNPCKHQDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F.W1 6D /r
VPUNPCKHQDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F.W1 6D /r
VPUNPCKHQDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F.W1 6D /r
Category
sse2,simdint,shunpck
Operands
Vdq,Wdq
Opcode
0x660F6D /r
CPU
P4+
Tested by
t5694
IiyVPUNPCKHQDQ:: PROC
    IiEmitOpcode 0x6D
    JMP IiyVUNPCKLPD.op:
  ENDP IiyVPUNPCKHQDQ::
↑ VPACKSSWB
Pack with Signed Saturation
Intel reference
VPACKSSWB xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F 63 /r
VPACKSSWB ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F 63 /r
VPACKSSWB xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F.WIG 63 /r
VPACKSSWB ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F.WIG 63 /r
VPACKSSWB zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F.WIG 63 /r
Category
mmx,conver
Operands
Pq,Qd | Vdq,Wdq
Opcode
0x0F63 /r | 0x660F63 /r
CPU
PX+
Tested by
t5696
IiyVPACKSSWB:: PROC
    IiEmitOpcode 0x63
    JMP IiyVPUNPCKLBW.op:
  ENDP IiyVPACKSSWB::
↑ VPACKSSDW
Pack with Signed Saturation
Intel reference
VPACKSSDW xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F 6B /r
VPACKSSDW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F 6B /r
VPACKSSDW xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F.W0 6B /r
VPACKSSDW ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F.W0 6B /r
VPACKSSDW zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F.W0 6B /r
Category
mmx,conver
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F6B /r | 0x660F6B /r
CPU
PX+
Tested by
t5696
IiyVPACKSSDW:: PROC
    IiEmitOpcode 0x6B
    JMP IiyVPUNPCKLDQ.op:
  ENDP IiyVPACKSSDW::
↑ VPACKUSWB
Pack with Unsigned Saturation
Intel reference
VPACKUSWB xmm1,xmm2, xmm3/m128 VEX.NDS.128.66.0F 67 /r
VPACKUSWB ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F 67 /r
VPACKUSWB xmm1{k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F.WIG 67 /r
VPACKUSWB ymm1{k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F.WIG 67 /r
VPACKUSWB zmm1{k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F.WIG 67 /r
Category
mmx,conver
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F67 /r | 0x660F67 /r
CPU
PX+
Tested by
t5696
IiyVPACKUSWB:: PROC
    IiEmitOpcode 0x67
    JMP IiyVPUNPCKLBW.op:
  ENDP IiyVPACKUSWB::
↑ VPACKUSDW
Pack with Unsigned Saturation
Intel reference
VPACKUSDW xmm1,xmm2, xmm3/m128 VEX.NDS.128.66.0F38 2B /r
VPACKUSDW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38 2B /r
VPACKUSDW xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F38.W0 2B /r
VPACKUSDW ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F38.W0 2B /r
VPACKUSDW zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F38.W0 2B /r
Category
sse41,simdint,conver
Operands
Vdq,Wdq
Opcode
0x660F382B /r
CPU
C2++
Documented
D43
Tested by
t5696
IiyVPACKUSDW:: PROC
    IiRequire SSE4.1
    IiEmitOpcode 0x2B
    IiAllowModifier MASK
    IiAllowBroadcasting DWORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F38, EVEX.NDS.128.66.0F38.W0
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F38, EVEX.NDS.256.66.0F38.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVPACKUSDW::
↑ VSCALEFSS
Scale Scalar Float32 Value With Float32 Value
Intel reference
VSCALEFSS xmm1 {k1}{z}, xmm2, xmm3/m32{er} EVEX.NDS.LIG.66.0F38.W0 2D /r
Opcode
0x2D
Tested by
t5710
IiyVSCALEFSS:: PROC
    IiAllowModifier MASK
    IiAllowRounding Register=xmm
    IiEmitOpcode 0x2D
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX T1S32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.LIG.66.0F38.W0
    RET
  ENDP IiyVSCALEFSS::
↑ VSCALEFSD
Scale Scalar Float64 Values With Float64 Values
Intel reference
VSCALEFSD xmm1 {k1}{z}, xmm2, xmm3/m64{er} EVEX.NDS.LIG.66.0F38.W1 2D /r
Opcode
0x2D
Tested by
t5710
IiyVSCALEFSD:: PROC
    IiAllowModifier MASK
    IiAllowRounding Register=xmm
    IiEmitOpcode 0x2D
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX T1S64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.LIG.66.0F38.W1
    RET
  ENDP IiyVSCALEFSD::
↑ VSCALEFPS
Scale Packed Float32 Values With Float32 Values
Intel reference
VSCALEFPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F38.W0 2C /r
VSCALEFPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F38.W0 2C /r
VSCALEFPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er} EVEX.NDS.512.66.0F38.W0 2C /r
Opcode
0x2C
Tested by
t5710
IiyVSCALEFPS:: PROC
    IiAllowModifier MASK
    IiAllowRounding
    IiAllowBroadcasting DWORD
    IiEmitOpcode 0x2C
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F38.W0
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVSCALEFPS::
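A short usage sketch; each destination element becomes src1[i] * 2^floor(src2[i]):
    VSCALEFPS ZMM1 {K2}, ZMM2, ZMM3   ; per-element scaling under mask K2
    VSCALEFPS XMM4, XMM5, XMM6        ; 128-bit form, EVEX only (no VEX encoding exists)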
↑ VSCALEFPD
Scale Packed Float64 Values With Float64 Values
Intel reference
VSCALEFPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F38.W1 2C /r
VSCALEFPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F38.W1 2C /r
VSCALEFPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er} EVEX.NDS.512.66.0F38.W1 2C /r
Opcode
0x2C
Tested by
t5710
IiyVSCALEFPD:: PROC
    IiAllowModifier MASK
    IiAllowRounding
    IiAllowBroadcasting QWORD
    IiEmitOpcode 0x2C
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F38.W1
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W1
    RET
  ENDP IiyVSCALEFPD::
↑ VRNDSCALESS
Round Scalar Float32 Value To Include A Given Number Of Fraction Bits
Intel reference
VRNDSCALESS xmm1 {k1}{z}, xmm2, xmm3/m32{sae}, imm8 EVEX.NDS.LIG.66.0F3A.W0 0A /r ib
Opcode
0x0A
Tested by
t5712
IiyVRNDSCALESS:: PROC
    IiAllowModifier MASK
    IiAllowSuppressing Operand=DH, Register=xmm
    IiEmitOpcode 0x0A
    IiOpEn RVM
    IiModRM /r
    IiEmitImm Operand4, BYTE
    IiDisp8EVEX T1S32
    IiDispatchFormat  xmm.xmm.xmm.imm, xmm.xmm.mem.imm
.xmm.xmm.xmm.imm:
.xmm.xmm.mem.imm:
    IiEmitPrefix EVEX.NDS.LIG.66.0F3A.W0
    RET
  ENDP IiyVRNDSCALESS::
↑ VRNDSCALESD
Round Scalar Float64 Value To Include A Given Number Of Fraction Bits
Intel reference
VRNDSCALESD xmm1 {k1}{z}, xmm2, xmm3/m64{sae}, imm8 EVEX.NDS.LIG.66.0F3A.W1 0B /r ib
Opcode
0x0B
Tested by
t5712
IiyVRNDSCALESD:: PROC
    IiAllowModifier MASK
    IiAllowSuppressing Operand=DH, Register=xmm
    IiEmitOpcode 0x0B 
    IiOpEn RVM
    IiModRM /r
    IiEmitImm Operand4, BYTE
    IiDisp8EVEX T1S64
    IiDispatchFormat  xmm.xmm.xmm.imm, xmm.xmm.mem.imm
.xmm.xmm.xmm.imm:
.xmm.xmm.mem.imm:
    IiEmitPrefix EVEX.NDS.LIG.66.0F3A.W1
    RET
  ENDP IiyVRNDSCALESD::
↑ VRNDSCALEPS
Round Packed Float32 Values To Include A Given Number Of Fraction Bits
Intel reference
VRNDSCALEPS xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8 EVEX.128.66.0F3A.W0 08 /r ib
VRNDSCALEPS ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8 EVEX.256.66.0F3A.W0 08 /r ib
VRNDSCALEPS zmm1 {k1}{z}, zmm2/m512/m32bcst{sae}, imm8 EVEX.512.66.0F3A.W0 08 /r ib
Opcode
0x08
Tested by
t5712
IiyVRNDSCALEPS:: PROC
    IiAllowModifier MASK
    IiAllowSuppressing Operand=DH
    IiAllowBroadcasting DWORD, Operand=DH
    IiEmitOpcode 0x08
    IiOpEn RM
    IiModRM /r
    IiEmitImm Operand3, BYTE
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm.imm, xmm.mem.imm, ymm.ymm.imm, ymm.mem.imm, zmm.zmm.imm, zmm.mem.imm
.xmm.xmm.imm:
.xmm.mem.imm:
    IiEmitPrefix EVEX.128.66.0F3A.W0
    RET
.ymm.ymm.imm:
.ymm.mem.imm:
    IiEmitPrefix EVEX.256.66.0F3A.W0
    RET
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix EVEX.512.66.0F3A.W0
    RET
  ENDP IiyVRNDSCALEPS::
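A usage sketch (Src128 is a hypothetical label); per the Intel definition of the imm8 control, imm8[7:4] gives the number of fraction bits to keep and imm8[1:0] the rounding mode (0 = nearest):
    VRNDSCALEPS ZMM1 {K1}, ZMM2, 0x00   ; round each single to the nearest integral value
    VRNDSCALEPS XMM3, [Src128], 0x10    ; keep one fraction bit: round to the nearest multiple of 0.5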
↑ VRNDSCALEPD
Round Packed Float64 Values To Include A Given Number Of Fraction Bits
Intel reference
VRNDSCALEPD xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8 EVEX.128.66.0F3A.W1 09 /r ib
VRNDSCALEPD ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8 EVEX.256.66.0F3A.W1 09 /r ib
VRNDSCALEPD zmm1 {k1}{z}, zmm2/m512/m64bcst{sae}, imm8 EVEX.512.66.0F3A.W1 09 /r ib
Opcode
0x09
Tested by
t5712
IiyVRNDSCALEPD:: PROC
    IiAllowModifier MASK
    IiAllowSuppressing Operand=DH 
    IiAllowBroadcasting QWORD, Operand=DH
    IiEmitOpcode 0x09
    IiOpEn RM
    IiModRM /r
    IiEmitImm Operand3, BYTE
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.imm, xmm.mem.imm, ymm.ymm.imm, ymm.mem.imm, zmm.zmm.imm, zmm.mem.imm
.xmm.xmm.imm:
.xmm.mem.imm:
    IiEmitPrefix EVEX.128.66.0F3A.W1
    RET
.ymm.ymm.imm:
.ymm.mem.imm:
    IiEmitPrefix EVEX.256.66.0F3A.W1
    RET
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix EVEX.512.66.0F3A.W1
    RET
  ENDP IiyVRNDSCALEPD::
↑ VROUNDSS
Round Scalar Single-FP Values
Intel reference
VROUNDSS xmm1, xmm2, xmm3/m32, imm8 VEX.NDS.LIG.66.0F3A.WIG 0A /r ib
Category
sse41,simdfp,conver
Operands
Vss,Wss,Ib
Opcode
0x660F3A0A /r
CPU
C2++
Documented
D43
Tested by
t5714
IiyVROUNDSS:: PROC
    IiEmitOpcode 0x0A
    IiEncoding DATA=DWORD
.op:IiOpEn RVM
    IiModRM /r
    IiEmitImm Operand4, BYTE, Max=15
    IiDispatchFormat  xmm.xmm.xmm.imm, xmm.xmm.mem.imm
.xmm.xmm.xmm.imm:
.xmm.xmm.mem.imm:
    IiEmitPrefix VEX.NDS.LIG.66.0F3A.WIG
    RET
  ENDP IiyVROUNDSS::
↑ VROUNDSD
Round Scalar Double-FP Values
Intel reference
VROUNDSD xmm1, xmm2, xmm3/m64, imm8 VEX.NDS.LIG.66.0F3A.WIG 0B /r ib
Category
sse41,simdfp,conver
Operands
Vsd,Wsd,Ib
Opcode
0x660F3A0B /r
CPU
C2++
Documented
D43
Tested by
t5714
IiyVROUNDSD:: PROC
    IiEncoding DATA=QWORD
    IiEmitOpcode 0x0B
    JMP IiyVROUNDSS.op:
  ENDP IiyVROUNDSD::
↑ VROUNDPS
Round Packed Single-FP Values
Intel reference
VROUNDPS xmm1, xmm2/m128, imm8 VEX.128.66.0F3A.WIG 08 /r ib
VROUNDPS ymm1, ymm2/m256, imm8 VEX.256.66.0F3A.WIG 08 /r ib
Category
sse41,simdfp,conver
Operands
Vps,Wps,Ib
Opcode
0x660F3A08 /r
CPU
C2++
Documented
D43
Tested by
t5714
IiyVROUNDPS:: PROC
    IiEmitOpcode 0x08
.op:IiOpEn RM
    IiModRM /r
    IiEmitImm Operand3, BYTE, Max=15
    IiDispatchFormat  xmm.xmm.imm, xmm.mem.imm, ymm.ymm.imm, ymm.mem.imm
.xmm.xmm.imm:
.xmm.mem.imm:
    IiEmitPrefix VEX.128.66.0F3A.WIG
    RET
.ymm.ymm.imm:
.ymm.mem.imm:
    IiEmitPrefix VEX.256.66.0F3A.WIG
    RET
  ENDP IiyVROUNDPS::
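The Max=15 check above rejects immediates outside the defined imm8 control range. A brief sketch of the common rounding-mode immediates (Src256 is a hypothetical label):
    VROUNDPS XMM1, XMM2, 0x01       ; round down (toward minus infinity)
    VROUNDPS YMM3, [Src256], 0x03   ; truncate toward zero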
↑ VROUNDPD
Round Packed Double-FP Values
Intel reference
VROUNDPD xmm1, xmm2/m128, imm8 VEX.128.66.0F3A.WIG 09 /r ib
VROUNDPD ymm1, ymm2/m256, imm8 VEX.256.66.0F3A.WIG 09 /r ib
Category
sse41,simdfp,conver
Operands
Vps,Wpd,Ib
Opcode
0x660F3A09 /r
CPU
C2++
Documented
D43
Tested by
t5714
IiyVROUNDPD:: PROC
    IiEmitOpcode 0x09
    JMP IiyVROUNDPS.op:
  ENDP IiyVROUNDPD::
↑ VPMADDUBSW
Multiply and Add Packed Signed and Unsigned Bytes
Intel reference
VPMADDUBSW xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38 04 /r
VPMADDUBSW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38 04 /r
VPMADDUBSW xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F38.WIG 04 /r
VPMADDUBSW ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F38.WIG 04 /r
VPMADDUBSW zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F38.WIG 04 /r
Category
ssse3,simdint
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F3804 /r | 0x660F3804 /r
CPU
C2+
Tested by
t5722
IiyVPMADDUBSW:: PROC
    IiAllowModifier MASK
    IiEmitOpcode 0x04
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F38, EVEX.NDS.128.66.0F38.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F38, EVEX.NDS.256.66.0F38.WIG
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.WIG
    RET
  ENDP IiyVPMADDUBSW::
↑ VPMADD52LUQ
Packed Multiply of Unsigned 52-bit Integers and Add the Low 52-bit Products to Qword Accumulators
Intel reference
VPMADD52LUQ xmm1 {k1}{z}, xmm2,xmm3/m128/m64bcst EVEX.DDS.128.66.0F38.W1 B4 /r
VPMADD52LUQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.DDS.256.66.0F38.W1 B4 /r
VPMADD52LUQ zmm1 {k1}{z}, zmm2,zmm3/m512/m64bcst EVEX.DDS.512.66.0F38.W1 B4 /r
Opcode
0xB4
Tested by
t5722
IiyVPMADD52LUQ:: PROC
    IiEmitOpcode 0xB4
.op:IiAllowModifier MASK
    IiAllowBroadcasting QWORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.DDS.128.66.0F38.W1
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.DDS.256.66.0F38.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.DDS.512.66.0F38.W1
    RET
  ENDP IiyVPMADD52LUQ::
↑ VPMADD52HUQ
Packed Multiply of Unsigned 52-bit Integers and Add High 52-bit Products to 64-bit Accumulators
Intel reference
VPMADD52HUQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.DDS.128.66.0F38.W1 B5 /r
VPMADD52HUQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.DDS.256.66.0F38.W1 B5 /r
VPMADD52HUQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.DDS.512.66.0F38.W1 B5 /r
Opcode
0xB5
Tested by
t5722
IiyVPMADD52HUQ:: PROC
    IiEmitOpcode 0xB5
    JMP IiyVPMADD52LUQ.op:
  ENDP IiyVPMADD52HUQ::
↑ VPHADDW
Packed Horizontal Add
Intel reference
VPHADDW xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38.WIG 01 /r
VPHADDW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38.WIG 01 /r
Category
ssse3,simdint
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F3801 /r | 0x660F3801 /r
CPU
C2+
Tested by
t5724
IiyVPHADDW:: PROC
    IiEmitOpcode 0x01
    IiEncoding DATA=WORD
.op:IiOpEn RVM
    IiModRM /r
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F38.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F38.WIG
    RET
  ENDP IiyVPHADDW::
↑ VPHADDD
Packed Horizontal Add
Intel reference
VPHADDD xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38.WIG 02 /r
VPHADDD ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38.WIG 02 /r
Category
ssse3,simdint
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F3802 /r | 0x660F3802 /r
CPU
C2+
Tested by
t5724
IiyVPHADDD:: PROC
    IiEncoding DATA=DWORD
    IiEmitOpcode 0x02
    JMP IiyVPHADDW.op:
  ENDP IiyVPHADDD::
↑ VPHADDSW
Packed Horizontal Add and Saturate
Intel reference
VPHADDSW xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38.WIG 03 /r
VPHADDSW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38.WIG 03 /r
Category
ssse3,simdint
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F3803 /r | 0x660F3803 /r
CPU
C2+
Tested by
t5724
IiyVPHADDSW:: PROC
    IiEncoding DATA=WORD
    IiEmitOpcode 0x03
    JMP IiyVPHADDW.op:
  ENDP IiyVPHADDSW::
↑ VPHSUBW
Packed Horizontal Subtract
Intel reference
VPHSUBW xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38.WIG 05 /r
VPHSUBW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38.WIG 05 /r
Category
ssse3,simdint
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F3805 /r | 0x660F3805 /r
CPU
C2+
Tested by
t5724
IiyVPHSUBW:: PROC
    IiEncoding DATA=WORD
    IiEmitOpcode 0x05
    JMP IiyVPHADDW.op:
  ENDP IiyVPHSUBW::
↑ VPHSUBD
Packed Horizontal Subtract
Intel reference
VPHSUBD xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38.WIG 06 /r
VPHSUBD ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38.WIG 06 /r
Category
ssse3,simdint
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F3806 /r | 0x660F3806 /r
CPU
C2+
Tested by
t5724
IiyVPHSUBD:: PROC
    IiEncoding DATA=DWORD
    IiEmitOpcode 0x06
    JMP IiyVPHADDW.op:
  ENDP IiyVPHSUBD::
↑ VPHSUBSW
Packed Horizontal Subtract and Saturate
Intel reference
VPHSUBSW xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38.WIG 07 /r
VPHSUBSW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38.WIG 07 /r
Category
ssse3,simdint
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F3807 /r | 0x660F3807 /r
CPU
C2+
Tested by
t5724
IiyVPHSUBSW:: PROC
    IiEncoding DATA=WORD
    IiEmitOpcode 0x07
    JMP IiyVPHADDW.op:
  ENDP IiyVPHSUBSW::
↑ VPAND
Logical AND
Intel reference
VPAND xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG DB /r
VPAND ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG DB /r
Category
mmx,logical
Operands
Pq,Qd | Vdq,Wdq
Opcode
0x0FDB /r | 0x660FDB /r
CPU
PX+
Tested by
t5730
IiyVPAND:: PROC
    IiEmitOpcode 0xDB
.op:IiOpEn RVM
    IiModRM /r
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F.WIG
    RET
  ENDP IiyVPAND::
↑ VPANDD
Bitwise AND Int32 Vectors
Intel reference
VPANDD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F.W0 DB /r
VPANDD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F.W0 DB /r
VPANDD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F.W0 DB /r
VPANDD zmm1 {k1}, zmm2, zmm3/m512/m32bcst MVEX.NDS.512.66.0F.W0 DB /r
Opcode
0xDB
Tested by
t5730
IiyVPANDD:: PROC
    IiEmitOpcode 0xDB
.op:IiAllowModifier MASK
    IiAllowBroadcasting DWORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDisp8MVEX Si32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F.W0
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.W0, MVEX.NDS.512.66.0F.W0
    RET
  ENDP IiyVPANDD::
↑ VPANDQ
Bitwise AND Int64 Vectors
Intel reference
VPANDQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F.W1 DB /r
VPANDQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F.W1 DB /r
VPANDQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F.W1 DB /r
VPANDQ zmm1 {k1}, zmm2, zmm3/m512/m64bcst MVEX.NDS.512.66.0F.W1 DB /r
Opcode
0xDB
Tested by
t5730
IiyVPANDQ:: PROC
    IiEmitOpcode 0xDB
.op:IiAllowModifier MASK
    IiAllowBroadcasting QWORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDisp8MVEX Ub64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F.W1
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.W1, MVEX.NDS.512.66.0F.W1
    RET
  ENDP IiyVPANDQ::
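The three handlers above illustrate how AVX-512 splits the untyped VEX VPAND into element-typed EVEX forms: VPAND covers only xmm/ymm, while VPANDD/VPANDQ add zmm operands, masking and embedded broadcast. A short sketch (Src512 is a hypothetical label):
    VPAND  YMM1, YMM2, YMM3            ; VEX.NDS.256, no masking available
    VPANDD ZMM1 {K3}, ZMM2, [Src512]   ; EVEX.512 with per-dword write mask
    VPANDQ XMM4, XMM5, XMM6            ; EVEX.128, qword-typed variant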
↑ VPOR
Bitwise Logical OR
Intel reference
VPOR xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG EB /r
VPOR ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG EB /r
Category
mmx,logical
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0FEB /r | 0x660FEB /r
CPU
PX+
Tested by
t5732
IiyVPOR:: PROC
    IiEmitOpcode 0xEB
    JMP IiyVPAND.op:
  ENDP IiyVPOR::
↑ VPORD
Bitwise OR Int32 Vectors
Intel reference
VPORD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F.W0 EB /r
VPORD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F.W0 EB /r
VPORD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F.W0 EB /r
VPORD zmm1 {k1}, zmm2, zmm3/m512/m32bcst MVEX.NDS.512.66.0F.W0 EB /r
Opcode
0xEB
Tested by
t5732
IiyVPORD:: PROC
    IiEmitOpcode 0xEB
    JMP IiyVPANDD.op:
  ENDP IiyVPORD::
↑ VPORQ
Bitwise OR Int64 Vectors
Intel reference
VPORQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F.W1 EB /r
VPORQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F.W1 EB /r
VPORQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F.W1 EB /r
VPORQ zmm1 {k1}, zmm2, zmm3/m512/m64bcst MVEX.NDS.512.66.0F.W1 EB /r
Opcode
0xEB
Tested by
t5732
IiyVPORQ:: PROC
    IiEmitOpcode 0xEB
    JMP IiyVPANDQ.op:
  ENDP IiyVPORQ::
↑ VPANDN
Logical AND NOT
Intel reference
VPANDN xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG DF /r
VPANDN ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG DF /r
Category
mmx,logical
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0FDF /r | 0x660FDF /r
CPU
PX+
Tested by
t5734
IiyVPANDN:: PROC
    IiEmitOpcode 0xDF
    JMP IiyVPAND.op:
  ENDP IiyVPANDN::
↑ VPANDND
Bitwise AND NOT Int32 Vectors
Intel reference
VPANDND xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F.W0 DF /r
VPANDND ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F.W0 DF /r
VPANDND zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F.W0 DF /r
VPANDND zmm1 {k1}, zmm2, zmm3/m512/m32bcst MVEX.NDS.512.66.0F.W0 DF /r
Opcode
0xDF
Tested by
t5734
IiyVPANDND:: PROC
    IiEmitOpcode 0xDF
    JMP IiyVPANDD.op:
  ENDP IiyVPANDND::
↑ VPANDNQ
Bitwise AND NOT Int64 Vectors
Intel reference
VPANDNQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F.W1 DF /r
VPANDNQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F.W1 DF /r
VPANDNQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F.W1 DF /r
VPANDNQ zmm1 {k1}, zmm2, zmm3/m512/m64bcst MVEX.NDS.512.66.0F.W1 DF /r
Opcode
0xDF
Tested by
t5734
IiyVPANDNQ:: PROC
    IiEmitOpcode 0xDF
    JMP IiyVPANDQ.op:
  ENDP IiyVPANDNQ::
↑ VPXOR
Logical Exclusive OR
Intel reference
VPXOR xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG EF /r
VPXOR ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG EF /r
Category
mmx,logical
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0FEF /r | 0x660FEF /r
CPU
PX+
Tested by
t5736
IiyVPXOR:: PROC
    IiEmitOpcode 0xEF
    JMP IiyVPAND.op:
  ENDP IiyVPXOR::
↑ VPXORD
Bitwise XOR Int32 Vectors
Intel reference
VPXORD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F.W0 EF /r
VPXORD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F.W0 EF /r
VPXORD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F.W0 EF /r
VPXORD zmm1 {k1}, zmm2, zmm3/m512/m32bcst MVEX.NDS.512.66.0F.W0 EF /r
Opcode
0xEF
Tested by
t5736
IiyVPXORD:: PROC
    IiEmitOpcode 0xEF
    JMP IiyVPANDD.op:
  ENDP IiyVPXORD::
↑ VPXORQ
Bitwise XOR Int64 Vectors
Intel reference
VPXORQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F.W1 EF /r
VPXORQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F.W1 EF /r
VPXORQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F.W1 EF /r
VPXORQ zmm1 {k1}, zmm2, zmm3/m512/m64bcst MVEX.NDS.512.66.0F.W1 EF /r
Opcode
0xEF
Tested by
t5736
IiyVPXORQ:: PROC
    IiEmitOpcode 0xEF
    JMP IiyVPANDQ.op:
  ENDP IiyVPXORQ::
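A common idiom: XORing a register with itself is the usual way to zero it, and the VEX encoding also clears the upper bits of the corresponding YMM/ZMM register:
    VPXOR  XMM0, XMM0, XMM0    ; zero XMM0 (and the rest of ZMM0 on AVX-512 hardware)
    VPXORD ZMM1, ZMM1, ZMM1    ; EVEX form, can also address registers XMM16..ZMM31 which VEX cannot reach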
↑ VRANGESS
Range Restriction Calculation From a Pair of Scalar Float32 Values
Intel reference
VRANGESS xmm1 {k1}{z}, xmm2, xmm3/m32{sae}, imm8 EVEX.NDS.LIG.66.0F3A.W0 51 /r ib
Opcode
0x51
Tested by
t5716
IiyVRANGESS:: PROC
    IiAllowModifier MASK
    IiAllowSuppressing Operand=DH, Register=xmm
    IiEmitOpcode 0x51
    IiOpEn RVM
    IiModRM /r
    IiEmitImm Operand4, BYTE, Max=15
    IiDisp8EVEX T1S32
    IiDispatchFormat  xmm.xmm.xmm.imm, xmm.xmm.mem.imm
.xmm.xmm.xmm.imm:
.xmm.xmm.mem.imm:
    IiEmitPrefix EVEX.NDS.LIG.66.0F3A.W0
    RET
  ENDP IiyVRANGESS::
↑ VRANGESD
Range Restriction Calculation From a pair of Scalar Float64 Values
Intel reference
VRANGESD xmm1 {k1}{z}, xmm2, xmm3/m64{sae}, imm8 EVEX.NDS.LIG.66.0F3A.W1 51 /r ib
Opcode
0x51
Tested by
t5716
IiyVRANGESD:: PROC
    IiAllowModifier MASK
    IiAllowSuppressing Operand=DH, Register=xmm
    IiEmitOpcode 0x51
    IiOpEn RVM
    IiModRM /r
    IiEmitImm Operand4, BYTE, Max=15
    IiDisp8EVEX T1S64
    IiDispatchFormat  xmm.xmm.xmm.imm, xmm.xmm.mem.imm
.xmm.xmm.xmm.imm:
.xmm.xmm.mem.imm:
    IiEmitPrefix EVEX.NDS.LIG.66.0F3A.W1
    RET
  ENDP IiyVRANGESD::
↑ VRANGEPS
Range Restriction Calculation For Packed Pairs of Float32 Values
Intel reference
VRANGEPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst, imm8 EVEX.NDS.128.66.0F3A.W0 50 /r ib
VRANGEPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst, imm8 EVEX.NDS.256.66.0F3A.W0 50 /r ib
VRANGEPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{sae}, imm8 EVEX.NDS.512.66.0F3A.W0 50 /r ib
Opcode
0x50
Tested by
t5716
IiyVRANGEPS:: PROC
    IiAllowModifier MASK
    IiAllowSuppressing Operand=DH
    IiAllowBroadcasting DWORD, Operand=DH
    IiEmitOpcode 0x50
    IiOpEn RVM
    IiModRM /r
    IiEmitImm Operand4, BYTE, Max=15
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm.xmm.imm, xmm.xmm.mem.imm, ymm.ymm.ymm.imm, ymm.ymm.mem.imm, zmm.zmm.zmm.imm, zmm.zmm.mem.imm
.xmm.xmm.xmm.imm:
.xmm.xmm.mem.imm:
    IiEmitPrefix EVEX.NDS.128.66.0F3A.W0
    RET
.ymm.ymm.ymm.imm:
.ymm.ymm.mem.imm:
    IiEmitPrefix EVEX.NDS.256.66.0F3A.W0
    RET
.zmm.zmm.zmm.imm:
.zmm.zmm.mem.imm:
    IiEmitPrefix EVEX.NDS.512.66.0F3A.W0
    RET
  ENDP IiyVRANGEPS::
↑ VRANGEPD
Range Restriction Calculation For Packed Pairs of Float64 Values
Intel reference
VRANGEPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst, imm8 EVEX.NDS.128.66.0F3A.W1 50 /r ib
VRANGEPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst, imm8 EVEX.NDS.256.66.0F3A.W1 50 /r ib
VRANGEPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{sae}, imm8 EVEX.NDS.512.66.0F3A.W1 50 /r ib
Opcode
0x50
Tested by
t5716
IiyVRANGEPD:: PROC
    IiAllowModifier MASK
    IiAllowSuppressing Operand=DH
    IiAllowBroadcasting QWORD, Operand=DH
    IiEmitOpcode 0x50
    IiOpEn RVM
    IiModRM /r
    IiEmitImm Operand4, BYTE, Max=15
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.xmm.imm, xmm.xmm.mem.imm, ymm.ymm.ymm.imm, ymm.ymm.mem.imm, zmm.zmm.zmm.imm, zmm.zmm.mem.imm
.xmm.xmm.xmm.imm:
.xmm.xmm.mem.imm:
    IiEmitPrefix EVEX.NDS.128.66.0F3A.W1
    RET
.ymm.ymm.ymm.imm:
.ymm.ymm.mem.imm:
    IiEmitPrefix EVEX.NDS.256.66.0F3A.W1
    RET
.zmm.zmm.zmm.imm:
.zmm.zmm.mem.imm:
    IiEmitPrefix EVEX.NDS.512.66.0F3A.W1
    RET
  ENDP IiyVRANGEPD::
↑ VREDUCESS
Perform a Reduction Transformation on a Scalar Float32 Value
Intel reference
VREDUCESS xmm1 {k1}{z}, xmm2, xmm3/m32{sae}, imm8 EVEX.NDS.LIG.66.0F3A.W0 57 /r ib
Opcode
0x57
Tested by
t5718
IiyVREDUCESS:: PROC
    IiAllowModifier MASK
    IiAllowSuppressing Operand=DH, Register=xmm
    IiEmitOpcode 0x57
    IiOpEn RVM
    IiModRM /r
    IiEmitImm Operand4, BYTE
    IiDisp8EVEX T1S32
    IiDispatchFormat  xmm.xmm.xmm.imm, xmm.xmm.mem.imm
.xmm.xmm.xmm.imm:
.xmm.xmm.mem.imm:
    IiEmitPrefix EVEX.NDS.LIG.66.0F3A.W0
    RET
  ENDP IiyVREDUCESS::
↑ VREDUCESD
Perform a Reduction Transformation on a Scalar Float64 Value
Intel reference
VREDUCESD xmm1 {k1}{z}, xmm2, xmm3/m64{sae}, imm8 EVEX.NDS.LIG.66.0F3A.W1 57 /r ib
Opcode
0x57
Tested by
t5718
IiyVREDUCESD:: PROC
    IiAllowModifier MASK
    IiAllowSuppressing Operand=DH, Register=xmm
    IiEmitOpcode 0x57
    IiOpEn RVM
    IiModRM /r
    IiEmitImm Operand4, BYTE
    IiDisp8EVEX T1S64
    IiDispatchFormat  xmm.xmm.xmm.imm, xmm.xmm.mem.imm
.xmm.xmm.xmm.imm:
.xmm.xmm.mem.imm:
    IiEmitPrefix EVEX.NDS.LIG.66.0F3A.W1
    RET
  ENDP IiyVREDUCESD::
↑ VREDUCEPS
Perform Reduction Transformation on Packed Float32 Values
Intel reference
VREDUCEPS xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8 EVEX.128.66.0F3A.W0 56 /r ib
VREDUCEPS ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8 EVEX.256.66.0F3A.W0 56 /r ib
VREDUCEPS zmm1 {k1}{z}, zmm2/m512/m32bcst{sae}, imm8 EVEX.512.66.0F3A.W0 56 /r ib
Opcode
0x56
Tested by
t5718
IiyVREDUCEPS:: PROC
    IiAllowModifier MASK
    IiAllowSuppressing Operand=DH
    IiAllowBroadcasting DWORD, Operand=DH
    IiEmitOpcode 0x56
    IiOpEn RM
    IiModRM /r
    IiEmitImm Operand3, BYTE
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm.imm, xmm.mem.imm, ymm.ymm.imm, ymm.mem.imm, zmm.zmm.imm, zmm.mem.imm
.xmm.xmm.imm:
.xmm.mem.imm:
    IiEmitPrefix EVEX.128.66.0F3A.W0
    RET
.ymm.ymm.imm:
.ymm.mem.imm:
    IiEmitPrefix EVEX.256.66.0F3A.W0
    RET
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix EVEX.512.66.0F3A.W0
    RET
  ENDP IiyVREDUCEPS::
↑ VREDUCEPD
Perform Reduction Transformation on Packed Float64 Values
Intel reference
VREDUCEPD xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8 EVEX.128.66.0F3A.W1 56 /r ib
VREDUCEPD ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8 EVEX.256.66.0F3A.W1 56 /r ib
VREDUCEPD zmm1 {k1}{z}, zmm2/m512/m64bcst{sae}, imm8 EVEX.512.66.0F3A.W1 56 /r ib
Opcode
0x56
Tested by
t5718
IiyVREDUCEPD:: PROC
    IiAllowModifier MASK
    IiAllowSuppressing Operand=DH
    IiAllowBroadcasting QWORD, Operand=DH
    IiEmitOpcode 0x56
    IiOpEn RM
    IiModRM /r
    IiEmitImm Operand3, BYTE
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.imm, xmm.mem.imm, ymm.ymm.imm, ymm.mem.imm, zmm.zmm.imm, zmm.mem.imm
.xmm.xmm.imm:
.xmm.mem.imm:
    IiEmitPrefix EVEX.128.66.0F3A.W1
    RET
.ymm.ymm.imm:
.ymm.mem.imm:
    IiEmitPrefix EVEX.256.66.0F3A.W1
    RET
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix EVEX.512.66.0F3A.W1
    RET
  ENDP IiyVREDUCEPD::
↑ VPRORVD
Bit Rotate DWORDS Right
Intel reference
VPRORVD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F38.W0 14 /r
VPRORVD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F38.W0 14 /r
VPRORVD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F38.W0 14 /r
Opcode
0x14
Tested by
t5628
IiyVPRORVD:: PROC
    IiEmitOpcode 0x14
.op:IiAllowModifier MASK
    IiAllowBroadcasting DWORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F38.W0
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVPRORVD::
↑ VPROLVD
Bit Rotate DWORDS Left
Intel reference
VPROLVD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F38.W0 15 /r
VPROLVD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F38.W0 15 /r
VPROLVD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F38.W0 15 /r
Opcode
0x15
Tested by
t5628
IiyVPROLVD:: PROC
    IiEmitOpcode 0x15
    JMP IiyVPRORVD.op:
  ENDP IiyVPROLVD::
↑ VPRORVQ
Bit Rotate QWORDS Right
Intel reference
VPRORVQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F38.W1 14 /r
VPRORVQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F38.W1 14 /r
VPRORVQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F38.W1 14 /r
Opcode
0x14
Tested by
t5628
IiyVPRORVQ:: PROC
    IiEmitOpcode 0x14
.op:IiAllowModifier MASK
    IiAllowBroadcasting QWORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F38.W1
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W1
    RET
  ENDP IiyVPRORVQ::
↑ VPROLVQ
Bit Rotate QWORDS Left
Intel reference
VPROLVQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F38.W1 15 /r
VPROLVQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F38.W1 15 /r
VPROLVQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F38.W1 15 /r
Opcode
0x15
Tested by
t5628
IiyVPROLVQ:: PROC
    IiEmitOpcode 0x15
    JMP IiyVPRORVQ.op:
  ENDP IiyVPROLVQ::
↑ VPRORD
Bit Rotate DWORDS Right
Intel reference
VPRORD xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8 EVEX.NDD.128.66.0F.W0 72 /0 ib
VPRORD ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8 EVEX.NDD.256.66.0F.W0 72 /0 ib
VPRORD zmm1 {k1}{z}, zmm2/m512/m32bcst, imm8 EVEX.NDD.512.66.0F.W0 72 /0 ib
Opcode
0x72
Tested by
t5630
IiyVPRORD:: PROC
    IiModRM /0
.di:IiAllowModifier MASK
    IiAllowBroadcasting DWORD, Operand=DH
    IiEmitOpcode 0x72
    IiOpEn VM
    IiEmitImm Operand3, BYTE
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm.imm, xmm.mem.imm, ymm.ymm.imm, ymm.mem.imm, zmm.zmm.imm, zmm.mem.imm
.xmm.xmm.imm:
.xmm.mem.imm:
    IiEmitPrefix EVEX.NDD.128.66.0F.W0
    RET
.ymm.ymm.imm:
.ymm.mem.imm:
    IiEmitPrefix EVEX.NDD.256.66.0F.W0
    RET
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix EVEX.NDD.512.66.0F.W0
    RET
  ENDP IiyVPRORD::
↑ VPROLD
Bit Rotate DWORDS Left
Intel reference
VPROLD xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8 EVEX.NDD.128.66.0F.W0 72 /1 ib
VPROLD ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8 EVEX.NDD.256.66.0F.W0 72 /1 ib
VPROLD zmm1 {k1}{z}, zmm2/m512/m32bcst, imm8 EVEX.NDD.512.66.0F.W0 72 /1 ib
Opcode
0x72
Tested by
t5630
IiyVPROLD:: PROC
    IiModRM /1
    JMP IiyVPRORD.di:
  ENDP IiyVPROLD::
↑ VPRORQ
Bit Rotate QWORDS Right
Intel reference
VPRORQ xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8 EVEX.NDD.128.66.0F.W1 72 /0 ib
VPRORQ ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8 EVEX.NDD.256.66.0F.W1 72 /0 ib
VPRORQ zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8 EVEX.NDD.512.66.0F.W1 72 /0 ib
Opcode
0x72
Tested by
t5630
IiyVPRORQ:: PROC
    IiModRM /0
.di:IiAllowModifier MASK
    IiAllowBroadcasting QWORD, Operand=DH
    IiEmitOpcode 0x72
    IiOpEn VM
    IiEmitImm Operand3, BYTE
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.imm, xmm.mem.imm, ymm.ymm.imm, ymm.mem.imm, zmm.zmm.imm, zmm.mem.imm
.xmm.xmm.imm:
.xmm.mem.imm:
    IiEmitPrefix EVEX.NDD.128.66.0F.W1
    RET
.ymm.ymm.imm:
.ymm.mem.imm:
    IiEmitPrefix EVEX.NDD.256.66.0F.W1
    RET
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix EVEX.NDD.512.66.0F.W1
    RET
  ENDP IiyVPRORQ::
↑ VPROLQ
Bit Rotate QWORDS Left
Intel reference
VPROLQ xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8 EVEX.NDD.128.66.0F.W1 72 /1 ib
VPROLQ ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8 EVEX.NDD.256.66.0F.W1 72 /1 ib
VPROLQ zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8 EVEX.NDD.512.66.0F.W1 72 /1 ib
Opcode
0x72
Tested by
t5630
IiyVPROLQ:: PROC
    IiModRM /1
    JMP IiyVPRORQ.di:
  ENDP IiyVPROLQ::
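All four immediate-rotate handlers share opcode 0x72 and are distinguished only by the ModRM reg field (/0 right, /1 left) and EVEX.W (W0 dword, W1 qword); the destination is carried in EVEX.vvvv (NDD). A short sketch:
    VPRORD ZMM1, ZMM2, 8        ; rotate each dword right by 8 bits
    VPROLQ YMM3 {K1}, YMM4, 1   ; rotate each qword left by 1 bit, under mask K1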
↑ VPERMI2B
Full Permute of BYTEs From Two Tables Overwriting the Index
Intel reference
VPERMI2B xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.DDS.128.66.0F38.W0 75 /r
VPERMI2B ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.DDS.256.66.0F38.W0 75 /r
VPERMI2B zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.DDS.512.66.0F38.W0 75 /r
Opcode
0x75
Tested by
t5740
IiyVPERMI2B:: PROC
    IiEmitOpcode 0x75
.op:IiEncoding DATA=BYTE
    IiAllowModifier MASK
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.DDS.128.66.0F38.W0
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.DDS.256.66.0F38.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.DDS.512.66.0F38.W0
    RET
  ENDP IiyVPERMI2B::
↑ VPERMI2W
Full Permute WORDs From Two Tables Overwriting the Index
Intel reference
VPERMI2W xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.DDS.128.66.0F38.W1 75 /r
VPERMI2W ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.DDS.256.66.0F38.W1 75 /r
VPERMI2W zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.DDS.512.66.0F38.W1 75 /r
Opcode
0x75
Tested by
t5740
IiyVPERMI2W:: PROC
    IiEmitOpcode 0x75
.op:IiEncoding DATA=WORD
    IiAllowModifier MASK
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.DDS.128.66.0F38.W1
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.DDS.256.66.0F38.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.DDS.512.66.0F38.W1
    RET
  ENDP IiyVPERMI2W::
↑ VPERMI2D
Full Permute DWORDs From Two Tables Overwriting the Index
Intel reference
VPERMI2D xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.DDS.128.66.0F38.W0 76 /r
VPERMI2D ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.DDS.256.66.0F38.W0 76 /r
VPERMI2D zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.DDS.512.66.0F38.W0 76 /r
Opcode
0x76
Tested by
t5740
IiyVPERMI2D:: PROC
    IiEmitOpcode 0x76
.op:IiAllowModifier MASK
    IiAllowBroadcasting DWORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.DDS.128.66.0F38.W0
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.DDS.256.66.0F38.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.DDS.512.66.0F38.W0
    RET
  ENDP IiyVPERMI2D::
↑ VPERMI2Q
Full Permute of QWORDS From Two Tables Overwriting the Index
Intel reference
VPERMI2Q xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.DDS.128.66.0F38.W1 76 /r
VPERMI2Q ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.DDS.256.66.0F38.W1 76 /r
VPERMI2Q zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.DDS.512.66.0F38.W1 76 /r
Opcode
0x76
Tested by
t5740
IiyVPERMI2Q:: PROC
    IiEmitOpcode 0x76
.op:IiAllowModifier MASK
    IiAllowBroadcasting QWORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.DDS.128.66.0F38.W1
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.DDS.256.66.0F38.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.DDS.512.66.0F38.W1
    RET
  ENDP IiyVPERMI2Q::
↑ VPERMI2PS
Full Permute of single-precision FP From Two Tables Overwriting the Index
Intel reference
VPERMI2PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.DDS.128.66.0F38.W0 77 /r
VPERMI2PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.DDS.256.66.0F38.W0 77 /r
VPERMI2PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.DDS.512.66.0F38.W0 77 /r
Opcode
0x77
Tested by
t5740
IiyVPERMI2PS:: PROC
    IiEmitOpcode 0x77
    JMP IiyVPERMI2D.op:
  ENDP IiyVPERMI2PS::
↑ VPERMI2PD
Full Permute of double-precision FP From Two Tables Overwriting the Index
Intel reference
VPERMI2PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.DDS.128.66.0F38.W1 77 /r
VPERMI2PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.DDS.256.66.0F38.W1 77 /r
VPERMI2PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.DDS.512.66.0F38.W1 77 /r
Opcode
0x77
Tested by
t5740
IiyVPERMI2PD:: PROC
    IiEmitOpcode 0x77
    JMP IiyVPERMI2Q.op:
  ENDP IiyVPERMI2PD::
↑ VPERMT2B
Full Permute BYTEs From Two Tables Overwriting a Table
Intel reference
VPERMT2B xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.DDS.128.66.0F38.W0 7D /r
VPERMT2B ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.DDS.256.66.0F38.W0 7D /r
VPERMT2B zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.DDS.512.66.0F38.W0 7D /r
Opcode
0x7D
Tested by
t5742
IiyVPERMT2B:: PROC
    IiEmitOpcode 0x7D
    JMP IiyVPERMI2B.op:
  ENDP IiyVPERMT2B::
↑ VPERMT2W
Full Permute WORDs from Two Tables Overwriting one Table
Intel reference
VPERMT2W xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.DDS.128.66.0F38.W1 7D /r
VPERMT2W ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.DDS.256.66.0F38.W1 7D /r
VPERMT2W zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.DDS.512.66.0F38.W1 7D /r
Opcode
0x7D
Tested by
t5742
IiyVPERMT2W:: PROC
    IiEmitOpcode 0x7D
    JMP IiyVPERMI2W.op:
  ENDP IiyVPERMT2W::
↑ VPERMT2D
Full Permute DWORDs from Two Tables Overwriting one Table
Intel reference
VPERMT2D xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.DDS.128.66.0F38.W0 7E /r
VPERMT2D ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.DDS.256.66.0F38.W0 7E /r
VPERMT2D zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.DDS.512.66.0F38.W0 7E /r
Opcode
0x7E
Tested by
t5742
IiyVPERMT2D:: PROC
    IiEmitOpcode 0x7E
    JMP IiyVPERMI2D.op:
  ENDP IiyVPERMT2D::
↑ VPERMT2Q
Full Permute QWORDs from Two Tables Overwriting one Table
Intel reference
VPERMT2Q xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.DDS.128.66.0F38.W1 7E /r
VPERMT2Q ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.DDS.256.66.0F38.W1 7E /r
VPERMT2Q zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.DDS.512.66.0F38.W1 7E /r
Opcode
0x7E
Tested by
t5742
IiyVPERMT2Q:: PROC
    IiEmitOpcode 0x7E
    JMP IiyVPERMI2Q.op:
  ENDP IiyVPERMT2Q::
↑ VPERMT2PS
Full Permute single-precision FP from Two Tables Overwriting one Table
Intel reference
VPERMT2PS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.DDS.128.66.0F38.W0 7F /r
VPERMT2PS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.DDS.256.66.0F38.W0 7F /r
VPERMT2PS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.DDS.512.66.0F38.W0 7F /r
Opcode
0x7F
Tested by
t5742
IiyVPERMT2PS:: PROC
    IiEmitOpcode 0x7F
    JMP IiyVPERMI2D.op:
  ENDP IiyVPERMT2PS::
↑ VPERMT2PD
Full Permute double-precision FP from Two Tables Overwriting one Table
Intel reference
VPERMT2PD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.DDS.128.66.0F38.W1 7F /r
VPERMT2PD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.DDS.256.66.0F38.W1 7F /r
VPERMT2PD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.DDS.512.66.0F38.W1 7F /r
Opcode
0x7F
Tested by
t5742
IiyVPERMT2PD:: PROC
    IiEmitOpcode 0x7F
    JMP IiyVPERMI2Q.op:
  ENDP IiyVPERMT2PD::
↑ VPERMB
Permute Packed Bytes Elements
Intel reference
VPERMB xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F38.W0 8D /r
VPERMB ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F38.W0 8D /r
VPERMB zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F38.W0 8D /r
Opcode
0x8D
Tested by
t5744
IiyVPERMB:: PROC
    IiAllowModifier MASK
    IiEmitOpcode 0x8D
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F38.W0
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVPERMB::
↑ VPERMW
Permute Packed Words Elements
Intel reference
VPERMW xmm1 {k1}{z}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F38.W1 8D /r
VPERMW ymm1 {k1}{z}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F38.W1 8D /r
VPERMW zmm1 {k1}{z}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F38.W1 8D /r
Opcode
0x8D
Tested by
t5744
IiyVPERMW:: PROC
    IiAllowModifier MASK
    IiEmitOpcode 0x8D
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F38.W1
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W1
    RET
  ENDP IiyVPERMW::
↑ VPERMD
Permute Packed Doublewords Elements
Description
VPERMD
Intel reference
VPERMD ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38.W0 36 /r
VPERMD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F38.W0 36 /r
VPERMD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F38.W0 36 /r
VPERMD zmm1 {k1}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F38.W0 36 /r
Opcode
0x36
Tested by
t5744
IiyVPERMD:: PROC
    IiAllowModifier MASK
    IiAllowBroadcasting DWORD
    IiEmitOpcode 0x36
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDisp8MVEX Di64
    IiDispatchFormat  ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F38.W0, EVEX.NDS.256.66.0F38.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W0, MVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVPERMD::
↑ VPERMQ
Qwords Element Permutation
Description
VPERMQ
Intel reference
VPERMQ ymm1, ymm2/m256, imm8 VEX.256.66.0F3A.W1 00 /r ib
VPERMQ ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8 EVEX.256.66.0F3A.W1 00 /r ib
VPERMQ zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8 EVEX.512.66.0F3A.W1 00 /r ib
VPERMQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F38.W1 36 /r
VPERMQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F38.W1 36 /r
Opcode
0x00 | 0x36
Tested by
t5744
IiyVPERMQ:: PROC
    MOV AL,0x00
    MOV BL,0x36
.op:IiAllowModifier MASK
    IiAllowBroadcasting QWORD
    IiModRM /r
    IiDisp8EVEX FV64
    CMP DL,imm
    JE .I:
    IiAllowBroadcasting QWORD
    IiOpEn RVM
    IiEmitOpcode EBX
    IiDispatchFormat  ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W1
    RET
.I: IiAllowBroadcasting QWORD, Operand=DH
    IiOpEn RM
    IiEmitOpcode EAX
    IiEmitImm Operand3, BYTE
    IiDispatchFormat  ymm.ymm.imm, ymm.mem.imm, zmm.zmm.imm, zmm.mem.imm
.ymm.ymm.imm:
.ymm.mem.imm:
    IiEmitPrefix VEX.256.66.0F3A.W1, EVEX.256.66.0F3A.W1
    RET
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix EVEX.512.66.0F3A.W1
    RET
  ENDP IiyVPERMQ::
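The handler dispatches between two distinct encodings: the immediate form (opcode 0x00 in the 0F3A map) and the index-vector form (opcode 0x36 in the 0F38 map). A brief sketch:
    VPERMQ YMM1, YMM2, 0x1B        ; immediate form: reverse the four qwords of YMM2
    VPERMQ ZMM1 {K1}, ZMM2, ZMM3   ; vector form: qwords of ZMM3 selected by the indices in ZMM2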
↑ VPERMPS
Permute Single-Precision Floating-Point Elements
Description
VPERMPS
Intel reference
VPERMPS ymm1, ymm2, ymm3/m256 VEX.256.66.0F38.W0 16 /r
VPERMPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F38.W0 16 /r
VPERMPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F38.W0 16 /r
Opcode
0x16
Tested by
t5746
IiyVPERMPS:: PROC
    IiAllowModifier MASK
    IiAllowBroadcasting DWORD
    IiEmitOpcode 0x16
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDispatchFormat  ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.256.66.0F38.W0, EVEX.NDS.256.66.0F38.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVPERMPS::
↑ VPERMPD
Permute Double-Precision Floating-Point Elements
Description
VPERMPD
Intel reference
VPERMPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F38.W1 16 /r
VPERMPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F38.W1 16 /r
VPERMPD ymm1, ymm2/m256, imm8 VEX.256.66.0F3A.W1 01 /r ib
VPERMPD ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8 EVEX.256.66.0F3A.W1 01 /r ib
VPERMPD zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8 EVEX.512.66.0F3A.W1 01 /r ib
Opcode
0x01 | 0x16
Tested by
t5746
IiyVPERMPD:: PROC
    MOV AL,0x01
    MOV BL,0x16
    JMP IiyVPERMQ.op:
  ENDP IiyVPERMPD::
↑ VPERMILPS
Permute In-Lane of Quadruples of Single-Precision Floating-Point Values
Description
VPERMILPS
Intel reference
VPERMILPS xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38.W0 0C /r
VPERMILPS ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38.W0 0C /r
VPERMILPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F38.W0 0C /r
VPERMILPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F38.W0 0C /r
VPERMILPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F38.W0 0C /r
VPERMILPS xmm1, xmm2/m128, imm8 VEX.128.66.0F3A.W0 04 /r ib
VPERMILPS ymm1, ymm2/m256, imm8 VEX.256.66.0F3A.W0 04 /r ib
VPERMILPS xmm1 {k1}{z}, xmm2/m128/m32bcst, imm8 EVEX.128.66.0F3A.W0 04 /r ib
VPERMILPS ymm1 {k1}{z}, ymm2/m256/m32bcst, imm8 EVEX.256.66.0F3A.W0 04 /r ib
VPERMILPS zmm1 {k1}{z}, zmm2/m512/m32bcst, imm8 EVEX.512.66.0F3A.W0 04 /r ib
Opcode
0x0C | 0x04
Tested by
t5746
IiyVPERMILPS:: PROC
    IiAllowModifier MASK
    IiModRM /r
    IiDisp8EVEX FV32
    CMP DL,imm
    JE .I:
    IiAllowBroadcasting DWORD
    IiOpEn RVM
    IiEmitOpcode 0x0C
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F38.W0, EVEX.NDS.128.66.0F38.W0
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F38.W0, EVEX.NDS.256.66.0F38.W0
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W0
    RET
.I: IiAllowBroadcasting DWORD, Operand=DH
    IiOpEn RM
    IiEmitOpcode 0x04
    IiEmitImm Operand3, BYTE
    IiDispatchFormat xmm.xmm.imm, xmm.mem.imm, ymm.ymm.imm, ymm.mem.imm, zmm.zmm.imm, zmm.mem.imm
.xmm.xmm.imm:
.xmm.mem.imm:
    IiEmitPrefix VEX.128.66.0F3A.W0, EVEX.128.66.0F3A.W0
    RET
.ymm.ymm.imm:
.ymm.mem.imm:
    IiEmitPrefix VEX.256.66.0F3A.W0, EVEX.256.66.0F3A.W0
    RET
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix EVEX.512.66.0F3A.W0
    RET
  ENDP IiyVPERMILPS::
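A usage sketch (not from t5746); both encodings permute dwords only within each 128-bit lane:
    VPERMILPS xmm0, xmm1, 0x1B   ; imm8 form (0x04): reverse the four dwords of xmm1.
    VPERMILPS ymm0, ymm1, ymm2   ; variable form (0x0C): bits 1:0 of each dword of ymm2 select a dword within the same lane of ymm1.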
↑ VPERMILPD
Permute In-Lane of Pairs of Double-Precision Floating-Point Values
Description
VPERMILPD
Intel reference
VPERMILPD xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38.W0 0D /r
VPERMILPD ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38.W0 0D /r
VPERMILPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F38.W1 0D /r
VPERMILPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F38.W1 0D /r
VPERMILPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F38.W1 0D /r
VPERMILPD xmm1, xmm2/m128, imm8 VEX.128.66.0F3A.W0 05 /r ib
VPERMILPD ymm1, ymm2/m256, imm8 VEX.256.66.0F3A.W0 05 /r ib
VPERMILPD xmm1 {k1}{z}, xmm2/m128/m64bcst, imm8 EVEX.128.66.0F3A.W1 05 /r ib
VPERMILPD ymm1 {k1}{z}, ymm2/m256/m64bcst, imm8 EVEX.256.66.0F3A.W1 05 /r ib
VPERMILPD zmm1 {k1}{z}, zmm2/m512/m64bcst, imm8 EVEX.512.66.0F3A.W1 05 /r ib
Opcode
0x0D | 0x05
Tested by
t5746
IiyVPERMILPD:: PROC
    IiAllowModifier MASK
    IiModRM /r
    IiDisp8EVEX FV64
    CMP DL,imm
    JE .I:
    IiAllowBroadcasting QWORD
    IiOpEn RVM
    IiEmitOpcode 0x0D
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F38.W0, EVEX.NDS.128.66.0F38.W1
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F38.W0, EVEX.NDS.256.66.0F38.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W1
    RET
.I: IiAllowBroadcasting QWORD, Operand=DH
    IiOpEn RM
    IiEmitOpcode 0x05
    IiEmitImm Operand3, BYTE
    IiDispatchFormat xmm.xmm.imm, xmm.mem.imm, ymm.ymm.imm, ymm.mem.imm, zmm.zmm.imm, zmm.mem.imm
.xmm.xmm.imm:
.xmm.mem.imm:
    IiEmitPrefix VEX.128.66.0F3A.W0, EVEX.128.66.0F3A.W1
    RET
.ymm.ymm.imm:
.ymm.mem.imm:
    IiEmitPrefix VEX.256.66.0F3A.W0, EVEX.256.66.0F3A.W1
    RET
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix EVEX.512.66.0F3A.W1
    RET
  ENDP IiyVPERMILPD::
↑ VPERM2F128
Permute Floating-Point Values
Description
VPERM2F128
Intel reference
VPERM2F128 ymm1, ymm2, ymm3/m256, imm8 VEX.NDS.256.66.0F3A.W0 06 /r ib
Opcode
0x06
Tested by
t5748
IiyVPERM2F128:: PROC
    IiEmitOpcode 0x06
.op:IiOpEn RVM
    IiModRM /r
    IiEmitImm Operand4, BYTE
    IiDispatchFormat  ymm.ymm.ymm.imm, ymm.ymm.mem.imm
.ymm.ymm.ymm.imm:
.ymm.ymm.mem.imm:
    IiEmitPrefix VEX.NDS.256.66.0F3A.W0
    RET
  ENDP IiyVPERM2F128::
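A usage sketch (not from t5748); each half of the destination receives a 128-bit lane of either source, selected by the imm8:
    VPERM2F128 ymm0, ymm1, ymm2, 0x20   ; low half = low lane of ymm1, high half = low lane of ymm2.
    VPERM2F128 ymm0, ymm1, ymm2, 0x31   ; low half = high lane of ymm1, high half = high lane of ymm2.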
↑ VPERM2I128
Permute Integer Values
Description
VPERM2I128
Intel reference
VPERM2I128 ymm1, ymm2, ymm3/m256, imm8 VEX.NDS.256.66.0F3A.W0 46 /r ib
Opcode
0x46
Tested by
t5748
IiyVPERM2I128:: PROC
    IiEmitOpcode 0x46
    JMP IiyVPERM2F128.op:
  ENDP IiyVPERM2I128::
↑ VPTESTMB
Logical AND and Set Mask
Intel reference
VPTESTMB k2 {k1}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F38.W0 26 /r F
VPTESTMB k2 {k1}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F38.W0 26 /r F
VPTESTMB k2 {k1}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F38.W0 26 /r F
Opcode
0x26
Tested by
t5760
IiyVPTESTMB:: PROC
    IiEncoding DATA=BYTE
    IiDisp8EVEX FVM
    IiEmitOpcode 0x26
    IiAllowMaskMerging
    IiOpEn RVM
    IiModRM /r
    IiDispatchFormat  krg.xmm.xmm, krg.xmm.mem, krg.ymm.ymm, krg.ymm.mem, krg.zmm.zmm, krg.zmm.mem
.krg.xmm.xmm:
.krg.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F38.W0
    RET
.krg.ymm.ymm:
.krg.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W0
    RET
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVPTESTMB::
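A usage sketch (not from t5760); the destination is always a mask register, and each result bit reports whether the byte-wise AND of the two sources is nonzero:
    VPTESTMB k2 {k1}, zmm1, zmm2   ; k2[i] = (zmm1.byte[i] AND zmm2.byte[i]) != 0, zeroed where k1[i] = 0.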
↑ VPTESTMW
Logical AND and Set Mask
Intel reference
VPTESTMW k2 {k1}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F38.W1 26 /r F
VPTESTMW k2 {k1}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F38.W1 26 /r F
VPTESTMW k2 {k1}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F38.W1 26 /r F
Opcode
0x26
Tested by
t5760
IiyVPTESTMW:: PROC
    IiEncoding DATA=WORD
    IiDisp8EVEX FVM
    IiEmitOpcode 0x26
.op:IiAllowMaskMerging
    IiOpEn RVM
    IiModRM /r
    IiDispatchFormat  krg.xmm.xmm, krg.xmm.mem, krg.ymm.ymm, krg.ymm.mem, krg.zmm.zmm, krg.zmm.mem
.krg.xmm.xmm:
.krg.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F38.W1
    RET
.krg.ymm.ymm:
.krg.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W1
    RET
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W1
    RET
  ENDP IiyVPTESTMW::
↑ VPTESTMD
Logical AND and Set Mask
Intel reference
VPTESTMD k2 {k1}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F38.W0 27 /r
VPTESTMD k2 {k1}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F38.W0 27 /r
VPTESTMD k2 {k1}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F38.W0 27 /r
VPTESTMD k2 {k1}, zmm2, zmm3/m512/m32bcst MVEX.NDS.512.66.0F38.W0 27 /r
Opcode
0x27
Tested by
t5760
IiyVPTESTMD:: PROC
    IiAllowBroadcasting DWORD
    IiDisp8EVEX FV32
    IiDisp8MVEX Si32
    IiEmitOpcode 0x27
    IiAllowMaskMerging
    IiOpEn RVM
    IiModRM /r
    IiDispatchFormat  krg.xmm.xmm, krg.xmm.mem, krg.ymm.ymm, krg.ymm.mem, krg.zmm.zmm, krg.zmm.mem
.krg.xmm.xmm:
.krg.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F38.W0
    RET
.krg.ymm.ymm:
.krg.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W0
    RET
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W0, MVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVPTESTMD::
↑ VPTESTMQ
Logical AND and Set Mask
Intel reference
VPTESTMQ k2 {k1}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F38.W1 27 /r F
VPTESTMQ k2 {k1}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F38.W1 27 /r F
VPTESTMQ k2 {k1}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F38.W1 27 /r F
Opcode
0x27
Tested by
t5760
IiyVPTESTMQ:: PROC
    IiAllowBroadcasting QWORD
    IiDisp8EVEX FV64
    IiEmitOpcode 0x27
    JMP IiyVPTESTMW.op:
  ENDP IiyVPTESTMQ::
↑ VPTESTNMB
Logical NAND and Set
Intel reference
VPTESTNMB k2 {k1}, xmm2, xmm3/m128 EVEX.NDS.128.F3.0F38.W0 26 /r
VPTESTNMB k2 {k1}, ymm2, ymm3/m256 EVEX.NDS.256.F3.0F38.W0 26 /r
VPTESTNMB k2 {k1}, zmm2, zmm3/m512 EVEX.NDS.512.F3.0F38.W0 26 /r
Opcode
0x26
Tested by
t5762
IiyVPTESTNMB:: PROC
    IiEncoding DATA=BYTE
    IiDisp8EVEX FVM
    IiEmitOpcode 0x26
.op:IiAllowMaskMerging
    IiOpEn RVM
    IiModRM /r
    IiDispatchFormat  krg.xmm.xmm, krg.xmm.mem, krg.ymm.ymm, krg.ymm.mem, krg.zmm.zmm, krg.zmm.mem
.krg.xmm.xmm:
.krg.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.F3.0F38.W0
    RET
.krg.ymm.ymm:
.krg.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.F3.0F38.W0
    RET
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.F3.0F38.W0
    RET
  ENDP IiyVPTESTNMB::
↑ VPTESTNMW
Logical NAND and Set
Intel reference
VPTESTNMW k2 {k1}, xmm2, xmm3/m128 EVEX.NDS.128.F3.0F38.W1 26 /r
VPTESTNMW k2 {k1}, ymm2, ymm3/m256 EVEX.NDS.256.F3.0F38.W1 26 /r
VPTESTNMW k2 {k1}, zmm2, zmm3/m512 EVEX.NDS.512.F3.0F38.W1 26 /r
Opcode
0x26
Tested by
t5762
IiyVPTESTNMW:: PROC
    IiEncoding DATA=WORD
    IiDisp8EVEX FVM
    IiEmitOpcode 0x26
.op:IiAllowMaskMerging
    IiOpEn RVM
    IiModRM /r
    IiDispatchFormat  krg.xmm.xmm, krg.xmm.mem, krg.ymm.ymm, krg.ymm.mem, krg.zmm.zmm, krg.zmm.mem
.krg.xmm.xmm:
.krg.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.F3.0F38.W1
    RET
.krg.ymm.ymm:
.krg.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.F3.0F38.W1
    RET
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.F3.0F38.W1
    RET
  ENDP IiyVPTESTNMW::
↑ VPTESTNMD
Logical NAND and Set
Intel reference
VPTESTNMD k2 {k1}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.F3.0F38.W0 27 /r
VPTESTNMD k2 {k1}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.F3.0F38.W0 27 /r
VPTESTNMD k2 {k1}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.F3.0F38.W0 27 /r
Opcode
0x27
Tested by
t5762
IiyVPTESTNMD:: PROC
    IiAllowBroadcasting DWORD
    IiDisp8EVEX FV32
    IiEmitOpcode 0x27
    JMP IiyVPTESTNMB.op:
  ENDP IiyVPTESTNMD::
↑ VPTESTNMQ
Logical NAND and Set
Intel reference
VPTESTNMQ k2 {k1}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.F3.0F38.W1 27 /r
VPTESTNMQ k2 {k1}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.F3.0F38.W1 27 /r
VPTESTNMQ k2 {k1}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.F3.0F38.W1 27 /r
Opcode
0x27
Tested by
t5762
IiyVPTESTNMQ:: PROC
    IiAllowBroadcasting QWORD
    IiDisp8EVEX FV64
    IiEmitOpcode 0x27
    JMP IiyVPTESTNMW.op:
  ENDP IiyVPTESTNMQ::
↑ VPTERNLOGD
Bitwise Ternary Logic
Intel reference
VPTERNLOGD xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst, imm8 EVEX.DDS.128.66.0F3A.W0 25 /r ib
VPTERNLOGD ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst, imm8 EVEX.DDS.256.66.0F3A.W0 25 /r ib
VPTERNLOGD zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst, imm8 EVEX.DDS.512.66.0F3A.W0 25 /r ib
Opcode
0x25
Tested by
t5780
IiyVPTERNLOGD:: PROC
    IiAllowModifier MASK
    IiAllowBroadcasting DWORD, Operand=DH
    IiEmitOpcode 0x25
    IiOpEn RVM
    IiModRM /r
    IiEmitImm Operand4, BYTE
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm.xmm.imm, xmm.xmm.mem.imm, ymm.ymm.ymm.imm, ymm.ymm.mem.imm, zmm.zmm.zmm.imm, zmm.zmm.mem.imm
.xmm.xmm.xmm.imm:
.xmm.xmm.mem.imm:
    IiEmitPrefix EVEX.DDS.128.66.0F3A.W0
    RET
.ymm.ymm.ymm.imm:
.ymm.ymm.mem.imm:
    IiEmitPrefix EVEX.DDS.256.66.0F3A.W0
    RET
.zmm.zmm.zmm.imm:
.zmm.zmm.mem.imm:
    IiEmitPrefix EVEX.DDS.512.66.0F3A.W0
    RET
  ENDP IiyVPTERNLOGD::
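A usage sketch (not from t5780); the imm8 is an eight-entry truth table applied bitwise to (destination, operand2, operand3), so 0x96, for instance, yields a three-way XOR:
    VPTERNLOGD zmm0, zmm1, zmm2, 0x96   ; per bit: zmm0 = zmm0 XOR zmm1 XOR zmm2.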
↑ VPTERNLOGQ
Bitwise Ternary Logic
Intel reference
VPTERNLOGQ xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst, imm8 EVEX.DDS.128.66.0F3A.W1 25 /r ib
VPTERNLOGQ ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst, imm8 EVEX.DDS.256.66.0F3A.W1 25 /r ib
VPTERNLOGQ zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst, imm8 EVEX.DDS.512.66.0F3A.W1 25 /r ib
Opcode
0x25
Tested by
t5780
IiyVPTERNLOGQ:: PROC
    IiAllowModifier MASK
    IiAllowBroadcasting QWORD, Operand=DH
    IiEmitOpcode 0x25
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiEmitImm Operand4, BYTE
    IiDispatchFormat  xmm.xmm.xmm.imm, xmm.xmm.mem.imm, ymm.ymm.ymm.imm, ymm.ymm.mem.imm, zmm.zmm.zmm.imm, zmm.zmm.mem.imm
.xmm.xmm.xmm.imm:
.xmm.xmm.mem.imm:
    IiEmitPrefix EVEX.DDS.128.66.0F3A.W1
    RET
.ymm.ymm.ymm.imm:
.ymm.ymm.mem.imm:
    IiEmitPrefix EVEX.DDS.256.66.0F3A.W1
    RET
.zmm.zmm.zmm.imm:
.zmm.zmm.mem.imm:
    IiEmitPrefix EVEX.DDS.512.66.0F3A.W1
    RET
  ENDP IiyVPTERNLOGQ::
↑ VPALIGNR
Packed Align Right
Intel reference
VPALIGNR xmm1, xmm2, xmm3/m128, imm8 VEX.NDS.128.66.0F3A 0F /r ib
VPALIGNR ymm1, ymm2, ymm3/m256, imm8 VEX.NDS.256.66.0F3A 0F /r ib
VPALIGNR xmm1 {k1}{z}, xmm2, xmm3/m128, imm8 EVEX.NDS.128.66.0F3A.WIG 0F /r ib
VPALIGNR ymm1 {k1}{z}, ymm2, ymm3/m256, imm8 EVEX.NDS.256.66.0F3A.WIG 0F /r ib
VPALIGNR zmm1 {k1}{z}, zmm2, zmm3/m512, imm8 EVEX.NDS.512.66.0F3A.WIG 0F /r ib
Category
ssse3,simdint
Operands
Pq,Qq | Vdq,Wdq
Opcode
0x0F3A0F /r | 0x660F3A0F /r
CPU
C2+
Tested by
t5780
IiyVPALIGNR:: PROC
    IiRequire SSSE3
    IiAllowModifier MASK
    IiEmitOpcode 0x0F
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiEmitImm Operand4, BYTE
    IiDispatchFormat  xmm.xmm.xmm.imm, xmm.xmm.mem.imm, ymm.ymm.ymm.imm, ymm.ymm.mem.imm, zmm.zmm.zmm.imm, zmm.zmm.mem.imm
.xmm.xmm.xmm.imm:
.xmm.xmm.mem.imm:
    IiEmitPrefix VEX.NDS.128.66.0F3A, EVEX.NDS.128.66.0F3A.WIG
    RET
.ymm.ymm.ymm.imm:
.ymm.ymm.mem.imm:
    IiEmitPrefix VEX.NDS.256.66.0F3A, EVEX.NDS.256.66.0F3A.WIG
    RET
.zmm.zmm.zmm.imm:
.zmm.zmm.mem.imm:
    IiEmitPrefix EVEX.NDS.512.66.0F3A.WIG
    RET
  ENDP IiyVPALIGNR::
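A usage sketch (not from t5780); the two sources are concatenated and shifted right by imm8 bytes, and with ymm/zmm operands the shift is applied separately within each 128-bit lane:
    VPALIGNR xmm0, xmm1, xmm2, 4   ; xmm0 = bytes 4..15 of xmm2 followed by bytes 0..3 of xmm1.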
↑ VPCMPB
Compare Packed Signed BYTE Values Into Mask
Intel reference
VPCMPB k1 {k2}, xmm2, xmm3/m128, imm8 EVEX.NDS.128.66.0F3A.W0 3F /r ib
VPCMPB k1 {k2}, ymm2, ymm3/m256, imm8 EVEX.NDS.256.66.0F3A.W0 3F /r ib
VPCMPB k1 {k2}, zmm2, zmm3/m512, imm8 EVEX.NDS.512.66.0F3A.W0 3F /r ib
Opcode
0x3F
Tested by
t5790
IiyVPCMPB:: PROC
    MOV AL,0x3F
.op:IiEmitOpcode EAX
    IiAllowModifier CODE
    IiAllowMaskMerging
    IiEncoding CODE=LONG,DATA=BYTE
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiEmitImm Operand4, BYTE, Max=7
    IiDispatchFormat  krg.xmm.xmm.imm, krg.xmm.mem.imm, krg.ymm.ymm.imm, krg.ymm.mem.imm, krg.zmm.zmm.imm, krg.zmm.mem.imm
.cc:SHL EDX,8 ; This entry is called with format krg,regmm,regmm/mem (no immediate).
    MOV DL,imm ; Convert that format to krg,regmm,regmm/mem,imm. 
    MOV [EDI+II.Operand4+EXP.Low],CL ; Create imm value from cc mnemonic (0..7).
    MOVB [EDI+II.Operand4+EXP.Status],'N'
    JMP IiyVPCMPB.op: ; Continue as if the condition were specified by imm value.    
.krg.xmm.xmm.imm:
.krg.xmm.mem.imm:
    IiEmitPrefix EVEX.NDS.128.66.0F3A.W0
    RET
.krg.ymm.ymm.imm:
.krg.ymm.mem.imm:
    IiEmitPrefix EVEX.NDS.256.66.0F3A.W0
    RET
.krg.zmm.zmm.imm:
.krg.zmm.mem.imm:
    IiEmitPrefix EVEX.NDS.512.66.0F3A.W0
    RET
  ENDP IiyVPCMPB::
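The .cc: entry is the common tail of the condition-code pseudo-mnemonics defined further below (VPCMPEQB .. VPCMPTRUEUB): they load the predicate into CL and let this handler append it as the imm8 operand. As a sketch (not from t5790), these two statements should therefore assemble to the same EVEX encoding:
    VPCMPB   k1 {k2}, zmm2, zmm3, 1   ; explicit predicate 1 = less-than.
    VPCMPLTB k1 {k2}, zmm2, zmm3      ; pseudo-mnemonic form of the same comparison.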
↑ VPCMPUB
Compare Packed Unsigned BYTE Values Into Mask
Intel reference
VPCMPUB k1 {k2}, xmm2, xmm3/m128, imm8 EVEX.NDS.128.66.0F3A.W0 3E /r ib
VPCMPUB k1 {k2}, ymm2, ymm3/m256, imm8 EVEX.NDS.256.66.0F3A.W0 3E /r ib
VPCMPUB k1 {k2}, zmm2, zmm3/m512, imm8 EVEX.NDS.512.66.0F3A.W0 3E /r ib
Opcode
0x3E
Tested by
t5790
IiyVPCMPUB:: PROC
    MOV AL,0x3E
    JMP IiyVPCMPB.op:
  ENDP IiyVPCMPUB::
↑ VPCMPW
Compare Packed Signed WORD Values Into Mask
Intel reference
VPCMPW k1 {k2}, xmm2, xmm3/m128, imm8 EVEX.NDS.128.66.0F3A.W1 3F /r ib
VPCMPW k1 {k2}, ymm2, ymm3/m256, imm8 EVEX.NDS.256.66.0F3A.W1 3F /r ib
VPCMPW k1 {k2}, zmm2, zmm3/m512, imm8 EVEX.NDS.512.66.0F3A.W1 3F /r ib
Opcode
0x3F
Tested by
t5790
IiyVPCMPW:: PROC
    MOV AL,0x3F
.op:IiEmitOpcode EAX
    IiAllowModifier CODE
    IiAllowMaskMerging
    IiEncoding CODE=LONG,DATA=WORD
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiEmitImm Operand4, BYTE, Max=7
    IiDispatchFormat  krg.xmm.xmm.imm, krg.xmm.mem.imm, krg.ymm.ymm.imm, krg.ymm.mem.imm, krg.zmm.zmm.imm, krg.zmm.mem.imm
.cc:SHL EDX,8 ; This entry is called with format krg,regmm,regmm/mem (no immediate).
    MOV DL,imm ; Convert that format to krg,regmm,regmm/mem,imm. 
    MOV [EDI+II.Operand4+EXP.Low],CL ; Create imm value from cc mnemonic (0..7).
    MOVB [EDI+II.Operand4+EXP.Status],'N'
    JMP IiyVPCMPW.op: ; Continue as if the condition were specified by imm value.    
.krg.xmm.xmm.imm:
.krg.xmm.mem.imm:
    IiEmitPrefix EVEX.NDS.128.66.0F3A.W1
    RET
.krg.ymm.ymm.imm:
.krg.ymm.mem.imm:
    IiEmitPrefix EVEX.NDS.256.66.0F3A.W1
    RET
.krg.zmm.zmm.imm:
.krg.zmm.mem.imm:
    IiEmitPrefix EVEX.NDS.512.66.0F3A.W1
    RET
  ENDP IiyVPCMPW::
↑ VPCMPUW
Compare Packed Unsigned WORD Values Into Mask
Intel reference
VPCMPUW k1 {k2}, xmm2, xmm3/m128, imm8 EVEX.NDS.128.66.0F3A.W1 3E /r ib
VPCMPUW k1 {k2}, ymm2, ymm3/m256, imm8 EVEX.NDS.256.66.0F3A.W1 3E /r ib
VPCMPUW k1 {k2}, zmm2, zmm3/m512, imm8 EVEX.NDS.512.66.0F3A.W1 3E /r ib
Opcode
0x3E
Tested by
t5790
IiyVPCMPUW:: PROC
    MOV AL,0x3E
    JMP IiyVPCMPW.op:
  ENDP IiyVPCMPUW::
↑ VPCMPD
Compare Packed Signed Integer DWORD Values into Mask
Intel reference
VPCMPD k1 {k2}, xmm2, xmm3/m128/m32bcst, imm8 EVEX.NDS.128.66.0F3A.W0 1F /r ib
VPCMPD k1 {k2}, ymm2, ymm3/m256/m32bcst, imm8 EVEX.NDS.256.66.0F3A.W0 1F /r ib
VPCMPD k1 {k2}, zmm2, zmm3/m512/m32bcst, imm8 EVEX.NDS.512.66.0F3A.W0 1F /r ib
VPCMPD k1 {k2}, zmm2, zmm3/m512/m32bcst, imm8 MVEX.NDS.512.66.0F3A.W0 1F /r ib
Opcode
0x1F
Tested by
t5792
IiyVPCMPD:: PROC
    MOV AL,0x1F
.op:IiEmitOpcode EAX
    IiAllowModifier CODE
    IiEncoding CODE=LONG
    IiAllowMaskMerging
    IiAllowBroadcasting DWORD, Operand=DH
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDisp8MVEX Si32
    IiEmitImm Operand4, BYTE, Max=7
    IiDispatchFormat  krg.xmm.xmm.imm, krg.xmm.mem.imm, krg.ymm.ymm.imm, krg.ymm.mem.imm, krg.zmm.zmm.imm, krg.zmm.mem.imm
.cc:SHL EDX,8 ; This entry is called with format krg,regmm,regmm/mem (no immediate).
    MOV DL,imm ; Convert that format to krg,regmm,regmm/mem,imm.
    MOV [EDI+II.Operand4+EXP.Low],CL ; Create imm value from cc mnemonic (0..7).
    MOVB [EDI+II.Operand4+EXP.Status],'N'
    JMP IiyVPCMPD.op: ; Continue as if the condition were specified by imm value.    
.krg.xmm.xmm.imm:
.krg.xmm.mem.imm:
    IiEmitPrefix EVEX.NDS.128.66.0F3A.W0
    RET
.krg.ymm.ymm.imm:
.krg.ymm.mem.imm:
    IiEmitPrefix EVEX.NDS.256.66.0F3A.W0
    RET
.krg.zmm.zmm.imm:
.krg.zmm.mem.imm:
    IiEmitPrefix EVEX.NDS.512.66.0F3A.W0, MVEX.NDS.512.66.0F3A.W0
    RET
  ENDP IiyVPCMPD::
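A usage sketch (not from t5792); the imm8 predicate uses the same 0..7 encoding as the BYTE/WORD forms (0=EQ, 1=LT, 2=LE, 3=FALSE, 4=NEQ, 5=NLT, 6=NLE, 7=TRUE):
    VPCMPD k1 {k2}, zmm2, zmm3, 2   ; k1[i] = zmm2.dword[i] <= zmm3.dword[i] (signed), under writemask k2.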
↑ VPCMPUD
Compare Packed Unsigned Integer DWORD Values into Mask
Intel reference
VPCMPUD k1 {k2}, xmm2, xmm3/m128/m32bcst, imm8 EVEX.NDS.128.66.0F3A.W0 1E /r ib
VPCMPUD k1 {k2}, ymm2, ymm3/m256/m32bcst, imm8 EVEX.NDS.256.66.0F3A.W0 1E /r ib
VPCMPUD k1 {k2}, zmm2, zmm3/m512/m32bcst, imm8 EVEX.NDS.512.66.0F3A.W0 1E /r ib
VPCMPUD k1 {k2}, zmm2, zmm3/m512/m32bcst, imm8 MVEX.NDS.512.66.0F3A.W0 1E /r ib
Opcode
0x1E
Tested by
t5792
IiyVPCMPUD:: PROC
    MOV AL,0x1E
    JMP IiyVPCMPD.op:
  ENDP IiyVPCMPUD::
↑ VPCMPQ
Compare Packed Signed Integer QWORD Values into Mask
Intel reference
VPCMPQ k1 {k2}, xmm2, xmm3/m128/m64bcst, imm8 EVEX.NDS.128.66.0F3A.W1 1F /r ib
VPCMPQ k1 {k2}, ymm2, ymm3/m256/m64bcst, imm8 EVEX.NDS.256.66.0F3A.W1 1F /r ib
VPCMPQ k1 {k2}, zmm2, zmm3/m512/m64bcst, imm8 EVEX.NDS.512.66.0F3A.W1 1F /r ib
Opcode
0x1F
Tested by
t5792
IiyVPCMPQ:: PROC
    MOV AL,0x1F
.op:IiEmitOpcode EAX
    IiAllowModifier CODE
    IiEncoding CODE=LONG
    IiAllowMaskMerging
    IiAllowBroadcasting QWORD, Operand=DH
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiEmitImm Operand4, BYTE, Max=7
    IiDispatchFormat  krg.xmm.xmm.imm, krg.xmm.mem.imm, krg.ymm.ymm.imm, krg.ymm.mem.imm, krg.zmm.zmm.imm, krg.zmm.mem.imm
.cc:SHL EDX,8 ; This entry is called with format krg,regmm,regmm/mem (no immediate).
    MOV DL,imm ; Convert that format to krg,regmm,regmm/mem,imm.
    MOV [EDI+II.Operand4+EXP.Low],CL ; Create imm value from cc mnemonic (0..7).
    MOVB [EDI+II.Operand4+EXP.Status],'N'
    JMP IiyVPCMPQ.op: ; Continue as if the condition were specified by imm value.    
.krg.xmm.xmm.imm:
.krg.xmm.mem.imm:
    IiEmitPrefix EVEX.NDS.128.66.0F3A.W1
    RET
.krg.ymm.ymm.imm:
.krg.ymm.mem.imm:
    IiEmitPrefix EVEX.NDS.256.66.0F3A.W1
    RET
.krg.zmm.zmm.imm:
.krg.zmm.mem.imm:
    IiEmitPrefix EVEX.NDS.512.66.0F3A.W1
    RET
  ENDP IiyVPCMPQ::
↑ VPCMPUQ
Compare Packed Unsigned Integer QWORD Values into Mask
Intel reference
VPCMPUQ k1 {k2}, xmm2, xmm3/m128/m64bcst, imm8 EVEX.NDS.128.66.0F3A.W1 1E /r ib
VPCMPUQ k1 {k2}, ymm2, ymm3/m256/m64bcst, imm8 EVEX.NDS.256.66.0F3A.W1 1E /r ib
VPCMPUQ k1 {k2}, zmm2, zmm3/m512/m64bcst, imm8 EVEX.NDS.512.66.0F3A.W1 1E /r ib
Opcode
0x1E
Tested by
t5792
IiyVPCMPUQ:: PROC
    MOV AL,0x1E
    JMP IiyVPCMPQ.op:
  ENDP IiyVPCMPUQ::
↑ VPCMPEQB
Compare if Equal Packed signed BYTE values into mask
Intel reference
VPCMPEQB xmm1, xmm2, xmm3 /m128 VEX.NDS.128.66.0F.WIG 74 /r
VPCMPEQB ymm1, ymm2, ymm3 /m256 VEX.NDS.256.66.0F.WIG 74 /r
VPCMPEQB k1 {k2}, xmm2, xmm3 /m128 EVEX.NDS.128.66.0F.WIG 74 /r
VPCMPEQB k1 {k2}, ymm2, ymm3 /m256 EVEX.NDS.256.66.0F.WIG 74 /r
VPCMPEQB k1 {k2}, zmm2, zmm3 /m512 EVEX.NDS.512.66.0F.WIG 74 /r
VPCMPEQB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3F /r 00
VPCMPEQB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3F /r 00
VPCMPEQB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3F /r 00
Opcode
3F /r 0x00
Tested by
t5794 t5797
IiyVPCMPEQB:: PROC
    MOV AL,0x3F
    MOV CL,0x00
    MOV EBX,EDX
    SHR EBX,16
    CMP BL,krg
    JNE .S:
    IiDispatchCode SHORT= .S:, LONG=IiyVPCMPB.cc:
.S: IiEncoding CODE=SHORT,DATA=BYTE
    IiAllowModifier CODE
    IiAllowMaskMerging
    IiEmitOpcode 0x74
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat xmm.xmm.xmm,xmm.xmm.mem,ymm.ymm.ymm,ymm.ymm.mem, \
          krg.xmm.xmm,krg.xmm.mem,krg.ymm.ymm,krg.ymm.mem,krg.zmm.zmm,krg.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F.WIG
    RET
.krg.xmm.xmm:
.krg.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F.WIG
    RET
.krg.ymm.ymm:
.krg.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F.WIG
    RET
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.WIG
    RET
   ENDP IiyVPCMPEQB::
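A sketch of the two operand shapes this handler distinguishes (not from t5794/t5797); a vector destination always takes the 0F 74 encoding, while a mask destination is routed by IiDispatchCode to either the short EVEX 0F 74 form or the long 0F3A 3F /r 00 form:
    VPCMPEQB xmm0, xmm1, xmm2    ; vector destination: VEX.NDS.128.66.0F.WIG 74 /r.
    VPCMPEQB k1 {k2}, zmm1, zmm2 ; mask destination: EVEX.NDS.512.66.0F.WIG 74 /r, or 0F3A 3F /r 00 when the long encoding is requested.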
↑ VPCMPLTB
Compare if Less Than Packed signed BYTE values into mask
Intel reference
VPCMPLTB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3F /r 01
VPCMPLTB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3F /r 01
VPCMPLTB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3F /r 01
Opcode
3F /r 0x01
Tested by
t5794
IiyVPCMPLTB:: PROC
    MOV AL,0x3F
    MOV CL,0x01
    JMP IiyVPCMPB.cc:
 ENDP IiyVPCMPLTB::
↑ VPCMPLEB
Compare if Less than or Equal Packed signed BYTE values into mask
Intel reference
VPCMPLEB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3F /r 02
VPCMPLEB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3F /r 02
VPCMPLEB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3F /r 02
Opcode
3F /r 0x02
Tested by
t5794
IiyVPCMPLEB:: PROC
    MOV AL,0x3F
    MOV CL,0x02
    JMP IiyVPCMPB.cc:
 ENDP IiyVPCMPLEB::
↑ VPCMPFALSEB
Compare if False Packed signed BYTE values into mask
Intel reference
VPCMPFALSEB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3F /r 03
VPCMPFALSEB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3F /r 03
VPCMPFALSEB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3F /r 03
Opcode
3F /r 0x03
Tested by
t5794
IiyVPCMPFALSEB:: PROC
    MOV AL,0x3F
    MOV CL,0x03
    JMP IiyVPCMPB.cc:
 ENDP IiyVPCMPFALSEB::
↑ VPCMPNEQB
Compare if Not Equal Packed signed BYTE values into mask
Intel reference
VPCMPNEQB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3F /r 04
VPCMPNEQB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3F /r 04
VPCMPNEQB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3F /r 04
Opcode
3F /r 0x04
Tested by
t5794
IiyVPCMPNEQB:: PROC
    MOV AL,0x3F
    MOV CL,0x04
    JMP IiyVPCMPB.cc:
 ENDP IiyVPCMPNEQB::
↑ VPCMPNLTB
Compare if Not Less Than Packed signed BYTE values into mask
Intel reference
VPCMPNLTB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3F /r 05
VPCMPNLTB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3F /r 05
VPCMPNLTB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3F /r 05
Opcode
3F /r 0x05
Tested by
t5794
IiyVPCMPNLTB:: PROC
    MOV AL,0x3F
    MOV CL,0x05
    JMP IiyVPCMPB.cc:
 ENDP IiyVPCMPNLTB::
↑ VPCMPNLEB
Compare if Not Less than or Equal Packed signed BYTE values into mask
Intel reference
VPCMPNLEB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3F /r 06
VPCMPNLEB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3F /r 06
VPCMPNLEB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3F /r 06
Opcode
3F /r 0x06
Tested by
t5794
IiyVPCMPNLEB:: PROC
    MOV AL,0x3F
    MOV CL,0x06
    JMP IiyVPCMPB.cc:
 ENDP IiyVPCMPNLEB::
↑ VPCMPTRUEB
Compare if True Packed signed BYTE values into mask
Intel reference
VPCMPTRUEB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3F /r 07
VPCMPTRUEB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3F /r 07
VPCMPTRUEB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3F /r 07
Opcode
3F /r 0x07
Tested by
t5794
IiyVPCMPTRUEB:: PROC
    MOV AL,0x3F
    MOV CL,0x07
    JMP IiyVPCMPB.cc:
 ENDP IiyVPCMPTRUEB::
↑ VPCMPEQUB
Compare if Equal Packed Unsigned BYTE values into mask
Intel reference
VPCMPEQUB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3E /r 00
VPCMPEQUB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3E /r 00
VPCMPEQUB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3E /r 00
Opcode
3E /r 0x00
Tested by
t5794
IiyVPCMPEQUB:: PROC
    MOV AL,0x3E
    MOV CL,0x00
    JMP IiyVPCMPB.cc
 ENDP IiyVPCMPEQUB::
↑ VPCMPLTUB
Compare if Less Than Packed Unsigned BYTE values into mask
Intel reference
VPCMPLTUB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3E /r 01
VPCMPLTUB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3E /r 01
VPCMPLTUB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3E /r 01
Opcode
3E /r 0x01
Tested by
t5794
IiyVPCMPLTUB:: PROC
    MOV AL,0x3E
    MOV CL,0x01
    JMP IiyVPCMPB.cc
 ENDP IiyVPCMPLTUB::
↑ VPCMPLEUB
Compare if Less than or Equal Packed Unsigned BYTE values into mask
Intel reference
VPCMPLEUB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3E /r 02
VPCMPLEUB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3E /r 02
VPCMPLEUB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3E /r 02
Opcode
3E /r 0x02
Tested by
t5794
IiyVPCMPLEUB:: PROC
    MOV AL,0x3E
    MOV CL,0x02
    JMP IiyVPCMPB.cc
 ENDP IiyVPCMPLEUB::
↑ VPCMPFALSEUB
Compare if False Packed Unsigned BYTE values into mask
Intel reference
VPCMPFALSEUB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3E /r 03
VPCMPFALSEUB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3E /r 03
VPCMPFALSEUB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3E /r 03
Opcode
3E /r 0x03
Tested by
t5794
IiyVPCMPFALSEUB:: PROC
    MOV AL,0x3E
    MOV CL,0x03
    JMP IiyVPCMPB.cc
 ENDP IiyVPCMPFALSEUB::
↑ VPCMPNEQUB
Compare if Not Equal Packed Unsigned BYTE values into mask
Intel reference
VPCMPNEQUB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3E /r 04
VPCMPNEQUB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3E /r 04
VPCMPNEQUB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3E /r 04
Opcode
3E /r 0x04
Tested by
t5794
IiyVPCMPNEQUB:: PROC
    MOV AL,0x3E
    MOV CL,0x04
    JMP IiyVPCMPB.cc
 ENDP IiyVPCMPNEQUB::
↑ VPCMPNLTUB
Compare if Not Less Than Packed Unsigned BYTE values into mask
Intel reference
VPCMPNLTUB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3E /r 05
VPCMPNLTUB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3E /r 05
VPCMPNLTUB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3E /r 05
Opcode
3E /r 0x05
Tested by
t5794
IiyVPCMPNLTUB:: PROC
    MOV AL,0x3E
    MOV CL,0x05
    JMP IiyVPCMPB.cc
 ENDP IiyVPCMPNLTUB::
↑ VPCMPNLEUB
Compare if Not Less than or Equal Packed Unsigned BYTE values into mask
Intel reference
VPCMPNLEUB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3E /r 06
VPCMPNLEUB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3E /r 06
VPCMPNLEUB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3E /r 06
Opcode
3E /r 0x06
Tested by
t5794
IiyVPCMPNLEUB:: PROC
    MOV AL,0x3E
    MOV CL,0x06
    JMP IiyVPCMPB.cc
 ENDP IiyVPCMPNLEUB::
↑ VPCMPTRUEUB
Compare if True Packed Unsigned BYTE values into mask
Intel reference
VPCMPTRUEUB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 3E /r 07
VPCMPTRUEUB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 3E /r 07
VPCMPTRUEUB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 3E /r 07
Opcode
3E /r 0x07
Tested by
t5794
IiyVPCMPTRUEUB:: PROC
    MOV AL,0x3E
    MOV CL,0x07
    JMP IiyVPCMPB.cc
 ENDP IiyVPCMPTRUEUB::
↑ VPCMPEQW
Compare if Equal Packed signed WORD values into mask
Intel reference
VPCMPEQW xmm1, xmm2, xmm3 /m128 VEX.NDS.128.66.0F.WIG 75 /r
VPCMPEQW ymm1, ymm2, ymm3 /m256 VEX.NDS.256.66.0F.WIG 75 /r
VPCMPEQW k1 {k2}, xmm2, xmm3 /m128 EVEX.NDS.128.66.0F.WIG 75 /r
VPCMPEQW k1 {k2}, ymm2, ymm3 /m256 EVEX.NDS.256.66.0F.WIG 75 /r
VPCMPEQW k1 {k2}, zmm2, zmm3 /m512 EVEX.NDS.512.66.0F.WIG 75 /r
VPCMPEQW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3F /r 00
VPCMPEQW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3F /r 00
VPCMPEQW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3F /r 00
Opcode
3F /r 0x00
Tested by
t5794 t5797
IiyVPCMPEQW:: PROC
    MOV EBX,EDX
    MOV AL,0x3F
    MOV CL,0x00
    SHR EBX,16
    CMP BL,krg
    JNE .S:
    IiDispatchCode SHORT= .S:, LONG=IiyVPCMPW.cc:
.S: IiEncoding CODE=SHORT,DATA=WORD
    IiAllowModifier CODE
    IiAllowMaskMerging
    IiEmitOpcode 0x75
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat xmm.xmm.xmm,xmm.xmm.mem,ymm.ymm.ymm,ymm.ymm.mem, \
          krg.xmm.xmm,krg.xmm.mem,krg.ymm.ymm,krg.ymm.mem,krg.zmm.zmm,krg.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F.WIG
    RET
.krg.xmm.xmm:
.krg.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F.WIG
    RET
.krg.ymm.ymm:
.krg.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F.WIG
    RET
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.WIG
    RET
  ENDP IiyVPCMPEQW::
↑ VPCMPLTW
Compare if Less Than Packed signed WORD values into mask
Intel reference
VPCMPLTW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3F /r 01
VPCMPLTW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3F /r 01
VPCMPLTW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3F /r 01
Opcode
3F /r 0x01
Tested by
t5794
IiyVPCMPLTW:: PROC
    MOV AL,0x3F
    MOV CL,0x01
    JMP IiyVPCMPW.cc:
 ENDP IiyVPCMPLTW::
↑ VPCMPLEW
Compare if Less than or Equal Packed signed WORD values into mask
Intel reference
VPCMPLEW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3F /r 02
VPCMPLEW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3F /r 02
VPCMPLEW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3F /r 02
Opcode
3F /r 0x02
Tested by
t5794
IiyVPCMPLEW:: PROC
    MOV AL,0x3F
    MOV CL,0x02
    JMP IiyVPCMPW.cc:
 ENDP IiyVPCMPLEW::
↑ VPCMPFALSEW
Compare if False Packed signed WORD values into mask
Intel reference
VPCMPFALSEW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3F /r 03
VPCMPFALSEW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3F /r 03
VPCMPFALSEW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3F /r 03
Opcode
3F /r 0x03
Tested by
t5794
IiyVPCMPFALSEW:: PROC
    MOV AL,0x3F
    MOV CL,0x03
    JMP IiyVPCMPW.cc:
 ENDP IiyVPCMPFALSEW::
↑ VPCMPNEQW
Compare if Not Equal Packed signed WORD values into mask
Intel reference
VPCMPNEQW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3F /r 04
VPCMPNEQW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3F /r 04
VPCMPNEQW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3F /r 04
Opcode
3F /r 0x04
Tested by
t5794
IiyVPCMPNEQW:: PROC
    MOV AL,0x3F
    MOV CL,0x04
    JMP IiyVPCMPW.cc:
 ENDP IiyVPCMPNEQW::
↑ VPCMPNLTW
Compare if Not Less Than Packed signed WORD values into mask
Intel reference
VPCMPNLTW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3F /r 05
VPCMPNLTW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3F /r 05
VPCMPNLTW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3F /r 05
Opcode
3F /r 0x05
Tested by
t5794
IiyVPCMPNLTW:: PROC
    MOV AL,0x3F
    MOV CL,0x05
    JMP IiyVPCMPW.cc:
 ENDP IiyVPCMPNLTW::
↑ VPCMPNLEW
Compare if Not Less than or Equal Packed signed WORD values into mask
Intel reference
VPCMPNLEW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3F /r 06
VPCMPNLEW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3F /r 06
VPCMPNLEW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3F /r 06
Opcode
3F /r 0x06
Tested by
t5794
IiyVPCMPNLEW:: PROC
    MOV AL,0x3F
    MOV CL,0x06
    JMP IiyVPCMPW.cc:
 ENDP IiyVPCMPNLEW::
↑ VPCMPTRUEW
Compare if True Packed signed WORD values into mask
Intel reference
VPCMPTRUEW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3F /r 07
VPCMPTRUEW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3F /r 07
VPCMPTRUEW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3F /r 07
Opcode
3F /r 0x07
Tested by
t5794
IiyVPCMPTRUEW:: PROC
    MOV AL,0x3F
    MOV CL,0x07
    JMP IiyVPCMPW.cc:
 ENDP IiyVPCMPTRUEW::
↑ VPCMPEQUW
Compare if Equal Packed Unsigned WORD values into mask
Intel reference
VPCMPEQUW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3E /r 00
VPCMPEQUW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3E /r 00
VPCMPEQUW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3E /r 00
Opcode
3E /r 0x00
Tested by
t5794
IiyVPCMPEQUW:: PROC
    MOV AL,0x3E
    MOV CL,0x00
    JMP IiyVPCMPW.cc
 ENDP IiyVPCMPEQUW::
↑ VPCMPLTUW
Compare if Less Than Packed Unsigned WORD values into mask
Intel reference
VPCMPLTUW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3E /r 01
VPCMPLTUW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3E /r 01
VPCMPLTUW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3E /r 01
Opcode
3E /r 0x01
Tested by
t5794
IiyVPCMPLTUW:: PROC
    MOV AL,0x3E
    MOV CL,0x01
    JMP IiyVPCMPW.cc
 ENDP IiyVPCMPLTUW::
↑ VPCMPLEUW
Compare if Less than or Equal Packed Unsigned WORD values into mask
Intel reference
VPCMPLEUW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3E /r 02
VPCMPLEUW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3E /r 02
VPCMPLEUW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3E /r 02
Opcode
3E /r 0x02
Tested by
t5794
IiyVPCMPLEUW:: PROC
    MOV AL,0x3E
    MOV CL,0x02
    JMP IiyVPCMPW.cc
 ENDP IiyVPCMPLEUW::
↑ VPCMPFALSEUW
Compare if False Packed Unsigned WORD values into mask
Intel reference
VPCMPFALSEUW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3E /r 03
VPCMPFALSEUW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3E /r 03
VPCMPFALSEUW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3E /r 03
Opcode
3E /r 0x03
Tested by
t5794
IiyVPCMPFALSEUW:: PROC
    MOV AL,0x3E
    MOV CL,0x03
    JMP IiyVPCMPW.cc
 ENDP IiyVPCMPFALSEUW::
↑ VPCMPNEQUW
Compare if Not Equal Packed Unsigned WORD values into mask
Intel reference
VPCMPNEQUW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3E /r 04
VPCMPNEQUW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3E /r 04
VPCMPNEQUW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3E /r 04
Opcode
3E /r 0x04
Tested by
t5794
IiyVPCMPNEQUW:: PROC
    MOV AL,0x3E
    MOV CL,0x04
    JMP IiyVPCMPW.cc
 ENDP IiyVPCMPNEQUW::
↑ VPCMPNLTUW
Compare if Not Less Than Packed Unsigned WORD values into mask
Intel reference
VPCMPNLTUW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3E /r 05
VPCMPNLTUW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3E /r 05
VPCMPNLTUW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3E /r 05
Opcode
3E /r 0x05
Tested by
t5794
IiyVPCMPNLTUW:: PROC
    MOV AL,0x3E
    MOV CL,0x05
    JMP IiyVPCMPW.cc
 ENDP IiyVPCMPNLTUW::
↑ VPCMPNLEUW
Compare if Not Less than or Equal Packed Unsigned WORD values into mask
Intel reference
VPCMPNLEUW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3E /r 06
VPCMPNLEUW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3E /r 06
VPCMPNLEUW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3E /r 06
Opcode
3E /r 0x06
Tested by
t5794
IiyVPCMPNLEUW:: PROC
    MOV AL,0x3E
    MOV CL,0x06
    JMP IiyVPCMPW.cc
 ENDP IiyVPCMPNLEUW::
↑ VPCMPTRUEUW
Compare if True Packed Unsigned WORD values into mask
Intel reference
VPCMPTRUEUW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 3E /r 07
VPCMPTRUEUW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 3E /r 07
VPCMPTRUEUW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 3E /r 07
Opcode
3E /r 0x07
Tested by
t5794
IiyVPCMPTRUEUW:: PROC
    MOV AL,0x3E
    MOV CL,0x07
    JMP IiyVPCMPW.cc
 ENDP IiyVPCMPTRUEUW::
↑ VPCMPEQD
Compare if Equal Packed signed DWORD values into mask
Intel reference
VPCMPEQD xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 76 /r
VPCMPEQD ymm1, ymm2, ymm3 /m256 VEX.NDS.256.66.0F.WIG 76 /r
VPCMPEQD k1 {k2}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F.W0 76 /r
VPCMPEQD k1 {k2}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F.W0 76 /r
VPCMPEQD k1 {k2}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F.W0 76 /r
VPCMPEQD k1 {k2}, zmm2, zmm3/m512/m32bcst MVEX.NDS.512.66.0F.W0 76 /r
VPCMPEQD k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1F /r 00
VPCMPEQD k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1F /r 00
VPCMPEQD k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1F /r 00
VPCMPEQD k1 {k2}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F3A.W0 1F /r 00
Opcode
1F /r 0x00
Tested by
t5796 t5797
IiyVPCMPEQD:: PROC
    MOV EBX,EDX
    MOV AL,0x1F
    MOV CL,0x00
    SHR EBX,16
    CMP BL,krg
    JNE .S:
    IiDispatchCode SHORT= .S:, LONG=IiyVPCMPD.cc:
 .S:IiEncoding CODE=SHORT,DATA=DWORD
    IiAllowModifier CODE
    IiAllowMaskMerging
    IiAllowBroadcasting DWORD
    IiEmitOpcode 0x76
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDisp8MVEX Si32
    IiDispatchFormat xmm.xmm.xmm,xmm.xmm.mem,ymm.ymm.ymm,ymm.ymm.mem, \
          krg.xmm.xmm,krg.xmm.mem,krg.ymm.ymm,krg.ymm.mem,krg.zmm.zmm,krg.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F.WIG
    RET
.krg.xmm.xmm:
.krg.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F.W0
    RET
.krg.ymm.ymm:
.krg.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F.W0
    RET
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.W0, MVEX.NDS.512.66.0F.W0
    RET
 ENDP IiyVPCMPEQD::
↑ VPCMPLTD
Compare if Less Than Packed signed DWORD values into mask
Intel reference
VPCMPLTD k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1F /r 01
VPCMPLTD k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1F /r 01
VPCMPLTD k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1F /r 01
VPCMPLTD k1 {k2}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F3A.W0 1F /r 01
VPCMPLTD k1 {k2}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F38.W0 74 /r
Opcode
1F /r 0x01
Tested by
t5796 t5797
IiyVPCMPLTD:: PROC
    MOV AL,0x1F
    MOV CL,0x01
    CMP DH,zmm ; Operand2. 
    JNE IiyVPCMPD.cc: ; Use long version 66.0F3A 1F ib 
    JNSt [EDI+II.MfxExplicit],iiMfxPREFIX_MVEX | iiMfxEH_Mask, IiyVPCMPD.cc: ; Use long version.
    JSt [EDI+II.MfgExplicit],iiMfgCODE_LONG, IiyVPCMPD.cc: ; Use long version.
    IiEncoding CODE=SHORT
    IiAllowModifier CODE
    IiAllowMaskMerging
    IiAllowBroadcasting DWORD
    IiEmitOpcode 0x74
    IiOpEn RVM
    IiModRM /r
    IiDisp8MVEX Si32
    IiDispatchFormat krg.zmm.zmm, krg.zmm.mem
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix MVEX.NDS.512.66.0F38.W0
    RET    
 ENDP IiyVPCMPLTD::
↑ VPCMPLED
Compare if Less than or Equal Packed signed DWORD values into mask
Intel reference
VPCMPLED k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1F /r 02
VPCMPLED k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1F /r 02
VPCMPLED k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1F /r 02
VPCMPLED k1 {k2}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F3A.W0 1F /r 02
Opcode
1F /r 0x02
Tested by
t5796
IiyVPCMPLED:: PROC
    MOV AL,0x1F
    MOV CL,0x02
    JMP IiyVPCMPD.cc:
 ENDP IiyVPCMPLED::
↑ VPCMPFALSED
Compare if False Packed signed DWORD values into mask
Intel reference
VPCMPFALSED k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1F /r 03
VPCMPFALSED k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1F /r 03
VPCMPFALSED k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1F /r 03
Opcode
1F /r 0x03
Tested by
t5796
IiyVPCMPFALSED:: PROC
    MOV AL,0x1F
    MOV CL,0x03
    JMP IiyVPCMPD.cc:
 ENDP IiyVPCMPFALSED::
↑ VPCMPNEQD
Compare if Not Equal Packed signed DWORD values into mask
Intel reference
VPCMPNEQD k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1F /r 04
VPCMPNEQD k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1F /r 04
VPCMPNEQD k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1F /r 04
VPCMPNEQD k1 {k2}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F3A.W0 1F /r 04
Opcode
1F /r 0x04
Tested by
t5796
IiyVPCMPNEQD:: PROC
    MOV AL,0x1F
    MOV CL,0x04
    JMP IiyVPCMPD.cc:
 ENDP IiyVPCMPNEQD::
↑ VPCMPNLTD
Compare if Not Less Than Packed signed DWORD values into mask
Intel reference
VPCMPNLTD k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1F /r 05
VPCMPNLTD k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1F /r 05
VPCMPNLTD k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1F /r 05
VPCMPNLTD k1 {k2}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F3A.W0 1F /r 05
Opcode
1F /r 0x05
Tested by
t5796
IiyVPCMPNLTD:: PROC
    MOV AL,0x1F
    MOV CL,0x05
    JMP IiyVPCMPD.cc:
 ENDP IiyVPCMPNLTD::
↑ VPCMPNLED
Compare if Not Less than or Equal Packed signed DWORD values into mask
Intel reference
VPCMPNLED k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1F /r 06
VPCMPNLED k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1F /r 06
VPCMPNLED k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1F /r 06
VPCMPNLED k1 {k2}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F3A.W0 1F /r 06
Opcode
1F /r 0x06
Tested by
t5796
IiyVPCMPNLED:: PROC
    MOV AL,0x1F
    MOV CL,0x06
    JMP IiyVPCMPD.cc:
 ENDP IiyVPCMPNLED::
↑ VPCMPTRUED
Compare if True Packed signed DWORD values into mask
Intel reference
VPCMPTRUED k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1F /r 07
VPCMPTRUED k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1F /r 07
VPCMPTRUED k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1F /r 07
Opcode
1F /r 0x07
Tested by
t5796
IiyVPCMPTRUED:: PROC
    MOV AL,0x1F
    MOV CL,0x07
    JMP IiyVPCMPD.cc:
 ENDP IiyVPCMPTRUED::
↑ VPCMPEQUD
Compare if Equal Packed Unsigned DWORD values into mask
Intel reference
VPCMPEQUD k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1E /r 00
VPCMPEQUD k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1E /r 00
VPCMPEQUD k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1E /r 00
VPCMPEQUD k1 {k2}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F3A.W0 1E /r 00
Opcode
1E /r 0x00
Tested by
t5796
IiyVPCMPEQUD:: PROC
    MOV AL,0x1E
    MOV CL,0x00
    JMP IiyVPCMPD.cc
 ENDP IiyVPCMPEQUD::
↑ VPCMPLTUD
Compare if Less Than Packed Unsigned DWORD values into mask
Intel reference
VPCMPLTUD k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1E /r 01
VPCMPLTUD k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1E /r 01
VPCMPLTUD k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1E /r 01
VPCMPLTUD k1 {k2}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F3A.W0 1E /r 01
Opcode
1E /r 0x01
Tested by
t5796
IiyVPCMPLTUD:: PROC
    MOV AL,0x1E
    MOV CL,0x01
    JMP IiyVPCMPD.cc
 ENDP IiyVPCMPLTUD::
↑ VPCMPLEUD
Compare if Less than or Equal Packed Unsigned DWORD values into mask
Intel reference
VPCMPLEUD k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1E /r 02
VPCMPLEUD k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1E /r 02
VPCMPLEUD k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1E /r 02
VPCMPLEUD k1 {k2}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F3A.W0 1E /r 02
Opcode
1E /r 0x02
Tested by
t5796
IiyVPCMPLEUD:: PROC
    MOV AL,0x1E
    MOV CL,0x02
    JMP IiyVPCMPD.cc
 ENDP IiyVPCMPLEUD::
↑ VPCMPFALSEUD
Compare if False Packed Unsigned DWORD values into mask
Intel reference
VPCMPFALSEUD k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1E /r 03
VPCMPFALSEUD k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1E /r 03
VPCMPFALSEUD k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1E /r 03
Opcode
1E /r 0x03
Tested by
t5796
IiyVPCMPFALSEUD:: PROC
    MOV AL,0x1E
    MOV CL,0x03
    JMP IiyVPCMPD.cc
 ENDP IiyVPCMPFALSEUD::
↑ VPCMPNEQUD
Compare if Not Equal Packed Unsigned DWORD values into mask
Intel reference
VPCMPNEQUD k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1E /r 04
VPCMPNEQUD k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1E /r 04
VPCMPNEQUD k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1E /r 04
VPCMPNEQUD k1 {k2}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F3A.W0 1E /r 04
Opcode
1E /r 0x04
Tested by
t5796
IiyVPCMPNEQUD:: PROC
    MOV AL,0x1E
    MOV CL,0x04
    JMP IiyVPCMPD.cc
 ENDP IiyVPCMPNEQUD::
↑ VPCMPNLTUD
Compare if Not Less Than Packed Unsigned DWORD values into mask
Intel reference
VPCMPNLTUD k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1E /r 05
VPCMPNLTUD k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1E /r 05
VPCMPNLTUD k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1E /r 05
VPCMPNLTUD k1 {k2}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F3A.W0 1E /r 05
Opcode
1E /r 0x05
Tested by
t5796
IiyVPCMPNLTUD:: PROC
    MOV AL,0x1E
    MOV CL,0x05
    JMP IiyVPCMPD.cc
 ENDP IiyVPCMPNLTUD::
↑ VPCMPNLEUD
Compare if Not Less than or Equal Packed Unsigned DWORD values into mask
Intel reference
VPCMPNLEUD k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1E /r 06
VPCMPNLEUD k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1E /r 06
VPCMPNLEUD k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1E /r 06
VPCMPNLEUD k1 {k2}, zmm2, zmm3/m512 MVEX.NDS.512.66.0F3A.W0 1E /r 06
Opcode
1E /r 0x06
Tested by
t5796
IiyVPCMPNLEUD:: PROC
    MOV AL,0x1E
    MOV CL,0x06
    JMP IiyVPCMPD.cc
 ENDP IiyVPCMPNLEUD::
↑ VPCMPTRUEUD
Compare if True Packed Unsigned DWORD values into mask
Intel reference
VPCMPTRUEUD k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W0 1E /r 07
VPCMPTRUEUD k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W0 1E /r 07
VPCMPTRUEUD k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W0 1E /r 07
Opcode
1E /r 0x07
Tested by
t5796
IiyVPCMPTRUEUD:: PROC
    MOV AL,0x1E
    MOV CL,0x07
    JMP IiyVPCMPD.cc
 ENDP IiyVPCMPTRUEUD::
↑ VPCMPEQQ
Compare if Equal Packed signed QWORD values into mask
Intel reference
VPCMPEQQ xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38.WIG 29 /r
VPCMPEQQ ymm1, ymm2, ymm3 /m256 VEX.NDS.256.66.0F38.WIG 29 /r
VPCMPEQQ k1 {k2}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F38.W1 29 /r
VPCMPEQQ k1 {k2}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F38.W1 29 /r
VPCMPEQQ k1 {k2}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F38.W1 29 /r
VPCMPEQQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1F /r 00
VPCMPEQQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1F /r 00
VPCMPEQQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1F /r 00
Opcode
1F /r 0x00
Tested by
t5796 t5797
IiyVPCMPEQQ:: PROC
    MOV EBX,EDX
    MOV AL,0x1F
    MOV CL,0x00
    SHR EBX,16
    CMP BL,krg
    JNE .S:
    IiDispatchCode  SHORT=.S:, LONG=IiyVPCMPQ.cc:
 .S:IiEncoding CODE=SHORT,DATA=QWORD
    IiAllowModifier CODE
    IiAllowMaskMerging
    IiAllowBroadcasting QWORD
    IiEmitOpcode 0x29
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDisp8MVEX Si64
    IiDispatchFormat xmm.xmm.xmm,xmm.xmm.mem,ymm.ymm.ymm,ymm.ymm.mem, \
          krg.xmm.xmm,krg.xmm.mem,krg.ymm.ymm,krg.ymm.mem,krg.zmm.zmm,krg.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F38.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F38.WIG
    RET
.krg.xmm.xmm:
.krg.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F38.W1
    RET
.krg.ymm.ymm:
.krg.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W1 
    RET
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W1
    RET
 ENDP IiyVPCMPEQQ::
↑ VPCMPLTQ
Compare if Less Than Packed signed QWORD values into mask
Intel reference
VPCMPLTQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1F /r 01
VPCMPLTQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1F /r 01
VPCMPLTQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1F /r 01
Opcode
1F /r 0x01
Tested by
t5796
IiyVPCMPLTQ:: PROC
    MOV AL,0x1F
    MOV CL,0x01
    JMP IiyVPCMPQ.cc:
 ENDP IiyVPCMPLTQ::
↑ VPCMPLEQ
Compare if Less than or Equal Packed signed QWORD values into mask
Intel reference
VPCMPLEQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1F /r 02
VPCMPLEQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1F /r 02
VPCMPLEQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1F /r 02
Opcode
1F /r 0x02
Tested by
t5796
IiyVPCMPLEQ:: PROC
    MOV AL,0x1F
    MOV CL,0x02
    JMP IiyVPCMPQ.cc:
 ENDP IiyVPCMPLEQ::
↑ VPCMPFALSEQ
Compare if False Packed signed QWORD values into mask
Intel reference
VPCMPFALSEQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1F /r 03
VPCMPFALSEQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1F /r 03
VPCMPFALSEQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1F /r 03
Opcode
1F /r 0x03
Tested by
t5796
IiyVPCMPFALSEQ:: PROC
    MOV AL,0x1F
    MOV CL,0x03
    JMP IiyVPCMPQ.cc:
 ENDP IiyVPCMPFALSEQ::
↑ VPCMPNEQQ
Compare if Not Equal Packed signed QWORD values into mask
Intel reference
VPCMPNEQQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1F /r 04
VPCMPNEQQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1F /r 04
VPCMPNEQQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1F /r 04
Opcode
1F /r 0x04
Tested by
t5796
IiyVPCMPNEQQ:: PROC
    MOV AL,0x1F
    MOV CL,0x04
    JMP IiyVPCMPQ.cc:
 ENDP IiyVPCMPNEQQ::
↑ VPCMPNLTQ
Compare if Not Less Than Packed signed QWORD values into mask
Intel reference
VPCMPNLTQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1F /r 05
VPCMPNLTQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1F /r 05
VPCMPNLTQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1F /r 05
Opcode
1F /r 0x05
Tested by
t5796
IiyVPCMPNLTQ:: PROC
    MOV AL,0x1F
    MOV CL,0x05
    JMP IiyVPCMPQ.cc:
 ENDP IiyVPCMPNLTQ::
↑ VPCMPNLEQ
Compare if Not Less than or Equal Packed signed QWORD values into mask
Intel reference
VPCMPNLEQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1F /r 06
VPCMPNLEQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1F /r 06
VPCMPNLEQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1F /r 06
Opcode
1F /r 0x06
Tested by
t5796
IiyVPCMPNLEQ:: PROC
    MOV AL,0x1F
    MOV CL,0x06
    JMP IiyVPCMPQ.cc:
 ENDP IiyVPCMPNLEQ::
↑ VPCMPTRUEQ
Compare if True Packed signed QWORD values into mask
Intel reference
VPCMPTRUEQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1F /r 07
VPCMPTRUEQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1F /r 07
VPCMPTRUEQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1F /r 07
Opcode
1F /r 0x07
Tested by
t5796
IiyVPCMPTRUEQ:: PROC
    MOV AL,0x1F
    MOV CL,0x07
    JMP IiyVPCMPQ.cc:
 ENDP IiyVPCMPTRUEQ::
↑ VPCMPEQUQ
Compare if Equal Packed Unsigned QWORD values into mask
Intel reference
VPCMPEQUQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1E /r 00
VPCMPEQUQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1E /r 00
VPCMPEQUQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1E /r 00
Opcode
1E /r 0x00
Tested by
t5796
IiyVPCMPEQUQ:: PROC
    MOV AL,0x1E
    MOV CL,0x00
    JMP IiyVPCMPQ.cc
 ENDP IiyVPCMPEQUQ::
↑ VPCMPLTUQ
Compare if Less Than Packed Unsigned QWORD values into mask
Intel reference
VPCMPLTUQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1E /r 01
VPCMPLTUQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1E /r 01
VPCMPLTUQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1E /r 01
Opcode
1E /r 0x01
Tested by
t5796
IiyVPCMPLTUQ:: PROC
    MOV AL,0x1E
    MOV CL,0x01
    JMP IiyVPCMPQ.cc
 ENDP IiyVPCMPLTUQ::
↑ VPCMPLEUQ
Compare if Less than or Equal Packed Unsigned QWORD values into mask
Intel reference
VPCMPLEUQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1E /r 02
VPCMPLEUQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1E /r 02
VPCMPLEUQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1E /r 02
Opcode
1E /r 0x02
Tested by
t5796
IiyVPCMPLEUQ:: PROC
    MOV AL,0x1E
    MOV CL,0x02
    JMP IiyVPCMPQ.cc
 ENDP IiyVPCMPLEUQ::
↑ VPCMPFALSEUQ
Compare if False Packed Unsigned QWORD values into mask
Intel reference
VPCMPFALSEUQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1E /r 03
VPCMPFALSEUQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1E /r 03
VPCMPFALSEUQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1E /r 03
Opcode
1E /r 0x03
Tested by
t5796
IiyVPCMPFALSEUQ:: PROC
    MOV AL,0x1E
    MOV CL,0x03
    JMP IiyVPCMPQ.cc
 ENDP IiyVPCMPFALSEUQ::
↑ VPCMPNEQUQ
Compare if Not Equal Packed Unsigned QWORD values into mask
Intel reference
VPCMPNEQUQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1E /r 04
VPCMPNEQUQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1E /r 04
VPCMPNEQUQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1E /r 04
Opcode
1E /r 0x04
Tested by
t5796
IiyVPCMPNEQUQ:: PROC
    MOV AL,0x1E
    MOV CL,0x04
    JMP IiyVPCMPQ.cc
 ENDP IiyVPCMPNEQUQ::
↑ VPCMPNLTUQ
Compare if Not Less Than Packed Unsigned QWORD values into mask
Intel reference
VPCMPNLTUQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1E /r 05
VPCMPNLTUQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1E /r 05
VPCMPNLTUQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1E /r 05
Opcode
1E /r 0x05
Tested by
t5796
IiyVPCMPNLTUQ:: PROC
    MOV AL,0x1E
    MOV CL,0x05
    JMP IiyVPCMPQ.cc
 ENDP IiyVPCMPNLTUQ::
↑ VPCMPNLEUQ
Compare if Not Less than or Equal Packed Unsigned QWORD values into mask
Intel reference
VPCMPNLEUQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1E /r 06
VPCMPNLEUQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1E /r 06
VPCMPNLEUQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1E /r 06
Opcode
1E /r 0x06
Tested by
t5796
IiyVPCMPNLEUQ:: PROC
    MOV AL,0x1E
    MOV CL,0x06
    JMP IiyVPCMPQ.cc
 ENDP IiyVPCMPNLEUQ::
↑ VPCMPTRUEUQ
Compare if True Packed Unsigned QWORD values into mask
Intel reference
VPCMPTRUEUQ k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F3A.W1 1E /r 07
VPCMPTRUEUQ k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F3A.W1 1E /r 07
VPCMPTRUEUQ k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F3A.W1 1E /r 07
Opcode
1E /r 0x07
Tested by
t5796
IiyVPCMPTRUEUQ:: PROC
    MOV AL,0x1E
    MOV CL,0x07
    JMP IiyVPCMPQ.cc
 ENDP IiyVPCMPTRUEUQ::
↑ VPCMPGTB
Compare Packed Signed Integers for Greater Than
Intel reference
VPCMPGTB xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 64 /r
VPCMPGTB ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 64 /r
VPCMPGTB k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F.WIG 64 /r
VPCMPGTB k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F.WIG 64 /r
VPCMPGTB k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F.WIG 64 /r
Category
mmx,compar
Operands
Pq,Qd | Vdq,Wdq
Opcode
0x0F64 /r | 0x660F64 /r
CPU
PX+
Tested by
t5798
IiyVPCMPGTB:: PROC
    IiEncoding DATA=BYTE
    IiAllowMaskMerging
    IiEmitOpcode 0x64
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, krg.xmm.xmm, krg.xmm.mem, krg.ymm.ymm, krg.ymm.mem, krg.zmm.zmm, krg.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F.WIG
    RET
.krg.xmm.xmm:
.krg.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F.WIG
    RET
.krg.ymm.ymm:
.krg.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F.WIG
    RET
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.WIG
    RET
  ENDP IiyVPCMPGTB::
↑ VPCMPGTW
Compare Packed Signed Integers for Greater Than
Intel reference
VPCMPGTW xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 65 /r
VPCMPGTW ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 65 /r
VPCMPGTW k1 {k2}, xmm2, xmm3/m128 EVEX.NDS.128.66.0F.WIG 65 /r
VPCMPGTW k1 {k2}, ymm2, ymm3/m256 EVEX.NDS.256.66.0F.WIG 65 /r
VPCMPGTW k1 {k2}, zmm2, zmm3/m512 EVEX.NDS.512.66.0F.WIG 65 /r
Category
mmx,compar
Operands
Pq,Qd | Vdq,Wdq
Opcode
0x0F65 /r | 0x660F65 /r
CPU
PX+
Tested by
t5798
IiyVPCMPGTW:: PROC
    IiEncoding DATA=WORD
    IiAllowMaskMerging
    IiEmitOpcode 0x65
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FVM
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, krg.xmm.xmm, krg.xmm.mem, krg.ymm.ymm, krg.ymm.mem, krg.zmm.zmm, krg.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F.WIG
    RET
.krg.xmm.xmm:
.krg.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F.WIG
    RET
.krg.ymm.ymm:
.krg.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F.WIG
    RET
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.WIG
    RET
  ENDP IiyVPCMPGTW::
↑ VPCMPGTD
Compare Packed Signed Integers for Greater Than
Intel reference
VPCMPGTD xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F.WIG 66 /r
VPCMPGTD ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F.WIG 66 /r
VPCMPGTD k1 {k2}, xmm2, xmm3/m128/m32bcst EVEX.NDS.128.66.0F.W0 66 /r
VPCMPGTD k1 {k2}, ymm2, ymm3/m256/m32bcst EVEX.NDS.256.66.0F.W0 66 /r
VPCMPGTD k1 {k2}, zmm2, zmm3/m512/m32bcst EVEX.NDS.512.66.0F.W0 66 /r
VPCMPGTD k2 {k1}, zmm1, zmm2/m512/m32bcst MVEX.NDS.512.66.0F.W0 66 /r
Category
mmx,compar
Operands
Pq,Qd | Vdq,Wdq
Opcode
0x0F66 /r | 0x660F66 /r
CPU
PX+
Tested by
t5798
IiyVPCMPGTD:: PROC
    IiAllowMaskMerging
    IiAllowBroadcasting DWORD
    IiEmitOpcode 0x66
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDisp8MVEX Si32
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, krg.xmm.xmm, krg.xmm.mem, krg.ymm.ymm, krg.ymm.mem, krg.zmm.zmm, krg.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F.WIG
    RET
.krg.xmm.xmm:
.krg.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F.W0
    RET
.krg.ymm.ymm:
.krg.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F.W0
    RET
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F.W0, MVEX.NDS.512.66.0F.W0
    RET
  ENDP IiyVPCMPGTD::
↑ VPCMPGTQ
Compare Packed Qword Data for Greater Than
Intel reference
VPCMPGTQ xmm1, xmm2, xmm3/m128 VEX.NDS.128.66.0F38.WIG 37 /r
VPCMPGTQ ymm1, ymm2, ymm3/m256 VEX.NDS.256.66.0F38.WIG 37 /r
VPCMPGTQ k1 {k2}, xmm2, xmm3/m128/m64bcst EVEX.NDS.128.66.0F38.W1 37 /r
VPCMPGTQ k1 {k2}, ymm2, ymm3/m256/m64bcst EVEX.NDS.256.66.0F38.W1 37 /r
VPCMPGTQ k1 {k2}, zmm2, zmm3/m512/m64bcst EVEX.NDS.512.66.0F38.W1 37 /r
Category
sse42,simdint,compar
Operands
Vdq,Wdq
Opcode
0x660F3837 /r
CPU
C2++
Documented
D43
Tested by
t5798
IiyVPCMPGTQ:: PROC
    IiAllowMaskMerging 
    IiAllowBroadcasting QWORD
    IiEmitOpcode 0x37
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, krg.xmm.xmm, krg.xmm.mem, krg.ymm.ymm, krg.ymm.mem, krg.zmm.zmm, krg.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix VEX.NDS.128.66.0F38.WIG
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix VEX.NDS.256.66.0F38.WIG
    RET
.krg.xmm.xmm:
.krg.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F38.W1
    RET
.krg.ymm.ymm:
.krg.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W1
    RET
.krg.zmm.zmm:
.krg.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W1
    RET
  ENDP IiyVPCMPGTQ::
↑ VPMOVM2B
Convert a Mask Register to a Vector Register
Intel reference
VPMOVM2B xmm1, k1 EVEX.128.F3.0F38.W0 28 /r
VPMOVM2B ymm1, k1 EVEX.256.F3.0F38.W0 28 /r
VPMOVM2B zmm1, k1 EVEX.512.F3.0F38.W0 28 /r
Opcode
0x28
Tested by
t5802
IiyVPMOVM2B:: PROC
    IiEmitOpcode 0x28
.op:IiOpEn RM
    IiModRM /r
    IiDispatchFormat  xmm.krg, ymm.krg, zmm.krg
.xmm.krg:
    IiEmitPrefix EVEX.128.F3.0F38.W0
    RET
.ymm.krg:
    IiEmitPrefix EVEX.256.F3.0F38.W0
    RET
.zmm.krg:
    IiEmitPrefix EVEX.512.F3.0F38.W0
    RET
  ENDP IiyVPMOVM2B::
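A possible source line assembled by this handler (the zmm form is shown; the xmm/ymm forms differ only in the EVEX length bits):
    VPMOVM2B zmm1, k1              ; each byte of zmm1 becomes all-ones or zero from the corresponding bit of k1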
↑ VPMOVM2W
Convert a Mask Register to a Vector Register
Intel reference
VPMOVM2W xmm1, k1 EVEX.128.F3.0F38.W1 28 /r
VPMOVM2W ymm1, k1 EVEX.256.F3.0F38.W1 28 /r
VPMOVM2W zmm1, k1 EVEX.512.F3.0F38.W1 28 /r
Opcode
0x28
Tested by
t5802
IiyVPMOVM2W:: PROC
    IiEmitOpcode 0x28
.op:IiOpEn RM
    IiModRM /r
    IiDispatchFormat  xmm.krg, ymm.krg, zmm.krg
.xmm.krg:
    IiEmitPrefix EVEX.128.F3.0F38.W1
    RET
.ymm.krg:
    IiEmitPrefix EVEX.256.F3.0F38.W1
    RET
.zmm.krg:
    IiEmitPrefix EVEX.512.F3.0F38.W1
    RET
  ENDP IiyVPMOVM2W::
↑ VPMOVM2D
Convert a Mask Register to a Vector Register
Intel reference
VPMOVM2D xmm1, k1 EVEX.128.F3.0F38.W0 38 /r
VPMOVM2D ymm1, k1 EVEX.256.F3.0F38.W0 38 /r
VPMOVM2D zmm1, k1 EVEX.512.F3.0F38.W0 38 /r
Opcode
0x38
Tested by
t5802
IiyVPMOVM2D:: PROC
    IiEmitOpcode 0x38
    JMP IiyVPMOVM2B.op:
  ENDP IiyVPMOVM2D::
↑ VPMOVM2Q
Convert a Mask Register to a Vector Register
Intel reference
VPMOVM2Q xmm1, k1 EVEX.128.F3.0F38.W1 38 /r
VPMOVM2Q ymm1, k1 EVEX.256.F3.0F38.W1 38 /r
VPMOVM2Q zmm1, k1 EVEX.512.F3.0F38.W1 38 /r
Opcode
0x38
Tested by
t5802
IiyVPMOVM2Q:: PROC
    IiEmitOpcode 0x38
    JMP IiyVPMOVM2W.op:
  ENDP IiyVPMOVM2Q::
↑ VPMOVB2M
Convert a Vector Register to a Mask
Intel reference
VPMOVB2M k1, xmm1 EVEX.128.F3.0F38.W0 29 /r
VPMOVB2M k1, ymm1 EVEX.256.F3.0F38.W0 29 /r
VPMOVB2M k1, zmm1 EVEX.512.F3.0F38.W0 29 /r
Opcode
0x29
Tested by
t5804
IiyVPMOVB2M:: PROC
    IiEmitOpcode 0x29
.op:IiOpEn RM
    IiModRM /r
    IiDispatchFormat  krg.xmm, krg.ymm, krg.zmm
.krg.xmm:
    IiEmitPrefix EVEX.128.F3.0F38.W0
    RET
.krg.ymm:
    IiEmitPrefix EVEX.256.F3.0F38.W0
    RET
.krg.zmm:
    IiEmitPrefix EVEX.512.F3.0F38.W0
    RET
  ENDP IiyVPMOVB2M::
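A possible source line for the reverse direction (register choice illustrative):
    VPMOVB2M k1, zmm1              ; bit n of k1 is taken from the most significant bit of byte n of zmm1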
↑ VPMOVW2M
Convert a Vector Register to a Mask
Intel reference
VPMOVW2M k1, xmm1 EVEX.128.F3.0F38.W1 29 /r
VPMOVW2M k1, ymm1 EVEX.256.F3.0F38.W1 29 /r
VPMOVW2M k1, zmm1 EVEX.512.F3.0F38.W1 29 /r
Opcode
0x29
Tested by
t5804
IiyVPMOVW2M:: PROC
    IiEmitOpcode 0x29
.op:IiOpEn RM
    IiModRM /r
    IiDispatchFormat  krg.xmm, krg.ymm, krg.zmm
.krg.xmm:
    IiEmitPrefix EVEX.128.F3.0F38.W1
    RET
.krg.ymm:
    IiEmitPrefix EVEX.256.F3.0F38.W1
    RET
.krg.zmm:
    IiEmitPrefix EVEX.512.F3.0F38.W1
    RET
  ENDP IiyVPMOVW2M::
↑ VPMOVD2M
Convert a Vector Register to a Mask
Intel reference
VPMOVD2M k1, xmm1 EVEX.128.F3.0F38.W0 39 /r
VPMOVD2M k1, ymm1 EVEX.256.F3.0F38.W0 39 /r
VPMOVD2M k1, zmm1 EVEX.512.F3.0F38.W0 39 /r
Opcode
0x39
Tested by
t5804
IiyVPMOVD2M:: PROC
    IiEmitOpcode 0x39
    JMP IiyVPMOVB2M.op:
  ENDP IiyVPMOVD2M::
↑ VPMOVQ2M
Convert a Vector Register to a Mask
Intel reference
VPMOVQ2M k1, xmm1 EVEX.128.F3.0F38.W1 39 /r
VPMOVQ2M k1, ymm1 EVEX.256.F3.0F38.W1 39 /r
VPMOVQ2M k1, zmm1 EVEX.512.F3.0F38.W1 39 /r
Opcode
0x39
Tested by
t5804
IiyVPMOVQ2M:: PROC
    IiEmitOpcode 0x39
    JMP IiyVPMOVW2M.op:
  ENDP IiyVPMOVQ2M::
↑ VPBROADCASTMW2D
Broadcast Mask to Vector Register
Intel reference
VPBROADCASTMW2D xmm1, k1 EVEX.128.F3.0F38.W0 3A /r
VPBROADCASTMW2D ymm1, k1 EVEX.256.F3.0F38.W0 3A /r
VPBROADCASTMW2D zmm1, k1 EVEX.512.F3.0F38.W0 3A /r
Opcode
0x3A
Tested by
t5806
IiyVPBROADCASTMW2D:: PROC
    IiEmitOpcode 0x3A
    JMP IiyVPMOVM2B.op:
  ENDP IiyVPBROADCASTMW2D::
↑ VPBROADCASTMB2Q
Broadcast Mask to Vector Register
Intel reference
VPBROADCASTMB2Q xmm1, k1 EVEX.128.F3.0F38.W1 2A /r
VPBROADCASTMB2Q ymm1, k1 EVEX.256.F3.0F38.W1 2A /r
VPBROADCASTMB2Q zmm1, k1 EVEX.512.F3.0F38.W1 2A /r
Opcode
0x2A
Tested by
t5806
IiyVPBROADCASTMB2Q:: PROC
    IiEmitOpcode 0x2A
    JMP IiyVPMOVM2W.op:
  ENDP IiyVPBROADCASTMB2Q::
↑ VPCONFLICTD
Detect Conflicts Within a Vector of Packed Dword Values into Dense Memory/Register
Intel reference
VPCONFLICTD xmm1 {k1}{z}, xmm2/m128/m32bcst EVEX.128.66.0F38.W0 C4 /r
VPCONFLICTD ymm1 {k1}{z}, ymm2/m256/m32bcst EVEX.256.66.0F38.W0 C4 /r
VPCONFLICTD zmm1 {k1}{z}, zmm2/m512/m32bcst EVEX.512.66.0F38.W0 C4 /r
Opcode
0xC4
Tested by
t5806
IiyVPCONFLICTD:: PROC
    IiAllowModifier MASK
    IiAllowBroadcasting DWORD
    IiEmitOpcode 0xC4
    IiOpEn RM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm, xmm.mem, ymm.ymm, ymm.mem, zmm.zmm, zmm.mem
.xmm.xmm:
.xmm.mem:
    IiEmitPrefix EVEX.128.66.0F38.W0
    RET
.ymm.ymm:
.ymm.mem:
    IiEmitPrefix EVEX.256.66.0F38.W0
    RET
.zmm.zmm:
.zmm.mem:
    IiEmitPrefix EVEX.512.66.0F38.W0
    RET
  ENDP IiyVPCONFLICTD::
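A possible source line using the masking and zeroing decorators listed above (registers illustrative):
    VPCONFLICTD zmm1 {k1}{z}, zmm2 ; per-dword bitmap of preceding equal elements, zero-masked by k1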
↑ VPCONFLICTQ
Detect Conflicts Within a Vector of Packed Qword Values into Dense Memory/Register
Intel reference
VPCONFLICTQ xmm1 {k1}{z}, xmm2/m128/m64bcst EVEX.128.66.0F38.W1 C4 /r
VPCONFLICTQ ymm1 {k1}{z}, ymm2/m256/m64bcst EVEX.256.66.0F38.W1 C4 /r
VPCONFLICTQ zmm1 {k1}{z}, zmm2/m512/m64bcst EVEX.512.66.0F38.W1 C4 /r
Opcode
0xC4
Tested by
t5806
IiyVPCONFLICTQ:: PROC
    IiAllowModifier MASK
    IiAllowBroadcasting QWORD
    IiEmitOpcode 0xC4
    IiOpEn RM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm, xmm.mem, ymm.ymm, ymm.mem, zmm.zmm, zmm.mem
.xmm.xmm:
.xmm.mem:
    IiEmitPrefix EVEX.128.66.0F38.W1
    RET
.ymm.ymm:
.ymm.mem:
    IiEmitPrefix EVEX.256.66.0F38.W1
    RET
.zmm.zmm:
.zmm.mem:
    IiEmitPrefix EVEX.512.66.0F38.W1
    RET
  ENDP IiyVPCONFLICTQ::
↑ VPMULTISHIFTQB
Select Packed Unaligned Bytes from Quadword Sources
Intel reference
VPMULTISHIFTQB xmm1 {k1}{z}, xmm2,xmm3/m128/m64bcst EVEX.NDS.128.66.0F38.W1 83 /r
VPMULTISHIFTQB ymm1 {k1}{z}, ymm2,ymm3/m256/m64bcst EVEX.NDS.256.66.0F38.W1 83 /r
VPMULTISHIFTQB zmm1 {k1}{z}, zmm2,zmm3/m512/m64bcst EVEX.NDS.512.66.0F38.W1 83 /r
Opcode
0x83
Tested by
t5806
IiyVPMULTISHIFTQB:: PROC
    IiAllowModifier MASK
    IiAllowBroadcasting QWORD
    IiEmitOpcode 0x83
    IiOpEn RVM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm.xmm, xmm.xmm.mem, ymm.ymm.ymm, ymm.ymm.mem, zmm.zmm.zmm, zmm.zmm.mem
.xmm.xmm.xmm:
.xmm.xmm.mem:
    IiEmitPrefix EVEX.NDS.128.66.0F38.W1
    RET
.ymm.ymm.ymm:
.ymm.ymm.mem:
    IiEmitPrefix EVEX.NDS.256.66.0F38.W1
    RET
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix EVEX.NDS.512.66.0F38.W1
    RET
  ENDP IiyVPMULTISHIFTQB::
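A possible source line (registers illustrative):
    VPMULTISHIFTQB zmm1 {k1}{z}, zmm2, zmm3 ; control bytes in zmm2 select unaligned 8-bit fields from the qwords of zmm3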
↑ VLOADUNPACKLD
Load Unaligned Low And Unpack To Doubleword Vector
Intel reference
VLOADUNPACKLD zmm1 {k1}, Ui32(mt) MVEX.512.0F38.W0 D0 /r
Opcode
0xD0
Tested by
t6100
IiyVLOADUNPACKLD:: PROC
    IiEmitOpcode 0xD0
.Di:IiDisp8MVEX Di32
.op:IiAllowMaskMerging
    IiOpEn RM
    IiModRM /r
    IiDispatchFormat  zmm.mem
.zmm.mem:
    IiEmitPrefix MVEX.512.0F38.W0
    RET
  ENDP IiyVLOADUNPACKLD::
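A possible source line, assuming a plain memory operand without an up-conversion specifier (the address register is illustrative):
    VLOADUNPACKLD zmm1 {k1}, [esi] ; load elements from [esi] and unpack them into zmm1 under write mask k1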
↑ VLOADUNPACKLPS
Load Unaligned Low And Unpack To Float32 Vector
Intel reference
VLOADUNPACKLPS zmm1 {k1}, Uf32(mt) MVEX.512.0F38.W0 D1 /r
Opcode
0xD1
Tested by
t6100
IiyVLOADUNPACKLPS:: PROC
    IiEmitOpcode 0xD1
    IiDisp8MVEX Df32
    JMP IiyVLOADUNPACKLD.op:
  ENDP IiyVLOADUNPACKLPS::
↑ VLOADUNPACKHD
Load Unaligned High And Unpack To Doubleword Vector
Intel reference
VLOADUNPACKHD zmm1 {k1}, Ui32(mt) MVEX.512.0F38.W0 D4 /r
Opcode
0xD4
Tested by
t6102
IiyVLOADUNPACKHD:: PROC
    IiEmitOpcode 0xD4
    JMP IiyVLOADUNPACKLD.Di:
  ENDP IiyVLOADUNPACKHD::
↑ VLOADUNPACKHPS
Load Unaligned High And Unpack To Float32 Vector
Intel reference
VLOADUNPACKHPS zmm1 {k1}, Uf32(mt) MVEX.512.0F38.W0 D5 /r
Opcode
0xD5
Tested by
t6102
IiyVLOADUNPACKHPS:: PROC
    IiEmitOpcode 0xD5
    IiDisp8MVEX Df32
    JMP IiyVLOADUNPACKLD.op:
  ENDP IiyVLOADUNPACKHPS::
↑ VLOADUNPACKLQ
Load Unaligned Low And Unpack To Int64 Vector
Intel reference
VLOADUNPACKLQ zmm1 {k1}, Ui64(mt) MVEX.512.0F38.W1 D0 /r
Opcode
0xD0
Tested by
t6100
IiyVLOADUNPACKLQ:: PROC
    IiEmitOpcode 0xD0
.op:IiDisp8MVEX Sn64
    IiAllowMaskMerging
    IiOpEn RM
    IiModRM /r
    IiDispatchFormat  zmm.mem
.zmm.mem:
    IiEmitPrefix MVEX.512.0F38.W1
    RET
  ENDP IiyVLOADUNPACKLQ::
↑ VLOADUNPACKLPD
Load Unaligned Low And Unpack To Float64 Vector
Intel reference
VLOADUNPACKLPD zmm1 {k1}, Uf64(mt) MVEX.512.0F38.W1 D1 /r
Opcode
0xD1
Tested by
t6100
IiyVLOADUNPACKLPD:: PROC
    IiEmitOpcode 0xD1
    JMP IiyVLOADUNPACKLQ.op:
  ENDP IiyVLOADUNPACKLPD::
↑ VLOADUNPACKHQ
Load Unaligned High And Unpack To Int64 Vector
Intel reference
VLOADUNPACKHQ zmm1 {k1}, Ui64(mt) MVEX.512.0F38.W1 D4 /r
Opcode
0xD4
Tested by
t6102
IiyVLOADUNPACKHQ:: PROC
    IiEmitOpcode 0xD4
    JMP IiyVLOADUNPACKLQ.op:
  ENDP IiyVLOADUNPACKHQ::
↑ VLOADUNPACKHPD
Load Unaligned High And Unpack To Float64 Vector
Intel reference
VLOADUNPACKHPD zmm1 {k1}, Uf64(mt) MVEX.512.0F38.W1 D5 /r
Opcode
0xD5
Tested by
t6102
IiyVLOADUNPACKHPD:: PROC
    IiEmitOpcode 0xD5
    JMP IiyVLOADUNPACKLQ.op:
  ENDP IiyVLOADUNPACKHPD::
↑ VPACKSTORELD
Pack and Store Unaligned Low From Int32 Vector
Intel reference
VPACKSTORELD mt {k1}, Di32(zmm1) MVEX.512.66.0F38.W0 D0 /r
Opcode
0xD0
Tested by
t6104
IiyVPACKSTORELD:: PROC
    IiEmitOpcode 0xD0
.Di:IiDisp8MVEX Di32    
.op:IiAllowMaskMerging
    IiOpEn MR
    IiModRM /r
    IiDispatchFormat  mem.zmm
.mem.zmm:
    IiEmitPrefix MVEX.512.66.0F38.W0
    RET
  ENDP IiyVPACKSTORELD::
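A possible source line, mirroring the operand order of the reference line above (the address register is illustrative):
    VPACKSTORELD [edi] {k1}, zmm1  ; pack the dwords of zmm1 selected by k1 and store them at [edi]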
↑ VPACKSTORELPS
Pack and Store Unaligned Low From Float32 Vector
Intel reference
VPACKSTORELPS mt {k1}, Df32(zmm1) MVEX.512.66.0F38.W0 D1 /r
Opcode
0xD1
Tested by
t6104
IiyVPACKSTORELPS:: PROC
    IiEmitOpcode 0xD1
    IiDisp8MVEX Df32
    JMP IiyVPACKSTORELD.op:
  ENDP IiyVPACKSTORELPS::
↑ VPACKSTOREHD
Pack And Store Unaligned High From Int32 Vector
Intel reference
VPACKSTOREHD mt {k1}, Di32(zmm1) MVEX.512.66.0F38.W0 D4 /r
Opcode
0xD4
Tested by
t6106
IiyVPACKSTOREHD:: PROC
    IiEmitOpcode 0xD4
    JMP IiyVPACKSTORELD.Di:
  ENDP IiyVPACKSTOREHD::
↑ VPACKSTOREHPS
Pack And Store Unaligned High From Float32 Vector
Intel reference
VPACKSTOREHPS mt {k1}, Df32(zmm1) MVEX.512.66.0F38.W0 D5 /r
Opcode
0xD5
Tested by
t6106
IiyVPACKSTOREHPS:: PROC
    IiEmitOpcode 0xD5
    IiDisp8MVEX Df32
    JMP IiyVPACKSTORELD.op:
  ENDP IiyVPACKSTOREHPS::
↑ VPACKSTORELQ
Pack and Store Unaligned Low From Int64 Vector
Intel reference
VPACKSTORELQ mt {k1}, Di64(zmm1) MVEX.512.66.0F38.W1 D0 /r
Opcode
0xD0
Tested by
t6104
IiyVPACKSTORELQ:: PROC
    IiEmitOpcode 0xD0
.op:IiDisp8MVEX Sn64
    IiAllowMaskMerging
    IiOpEn MR
    IiModRM /r
    IiDispatchFormat  mem.zmm
.mem.zmm:
    IiEmitPrefix MVEX.512.66.0F38.W1
    RET
  ENDP IiyVPACKSTORELQ::
↑ VPACKSTORELPD
Pack and Store Unaligned Low From Float64 Vector
Intel reference
VPACKSTORELPD mt {k1}, Df64(zmm1) MVEX.512.66.0F38.W1 D1 /r
Opcode
0xD1
Tested by
t6104
IiyVPACKSTORELPD:: PROC
    IiEmitOpcode 0xD1
    JMP IiyVPACKSTORELQ.op:
  ENDP IiyVPACKSTORELPD::
↑ VPACKSTOREHQ
Pack And Store Unaligned High From Int64 Vector
Intel reference
VPACKSTOREHQ mt {k1}, Di64(zmm1) MVEX.512.66.0F38.W1 D4 /r
Opcode
0xD4
Tested by
t6106
IiyVPACKSTOREHQ:: PROC
    IiEmitOpcode 0xD4
    JMP IiyVPACKSTORELQ.op:
  ENDP IiyVPACKSTOREHQ::
↑ VPACKSTOREHPD
Pack And Store Unaligned High From Float64 Vector
Intel reference
VPACKSTOREHPD mt {k1}, Df64(zmm1) MVEX.512.66.0F38.W1 D5 /r
Opcode
0xD5
Tested by
t6106
IiyVPACKSTOREHPD:: PROC
    IiEmitOpcode 0xD5
    JMP IiyVPACKSTORELQ.op:
  ENDP IiyVPACKSTOREHPD::
↑ VCVTFXPNTUDQ2PS
Convert Fixed Point Uint32 Vector to Float32 Vector
Intel reference
VCVTFXPNTUDQ2PS zmm1 {k1}, Si32(zmm2/mt), imm8 MVEX.512.0F3A.W0 CA /r ib
Opcode
0xCA
Tested by
t6110
IiyVCVTFXPNTUDQ2PS:: PROC
    IiEmitOpcode 0xCA
.op:IiAllowModifier MASK,SAE,EH
    IiOpEn RM
    IiModRM /r
    IiDisp8MVEX Si32
    IiEmitImm Operand3, BYTE, Max=127
    IiDispatchFormat  zmm.zmm.imm, zmm.mem.imm
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix MVEX.512.0F3A.W0
    RET
  ENDP IiyVCVTFXPNTUDQ2PS::
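A possible source line; the imm8 value is only a placeholder for the rounding / fixed-point exponent-adjustment control (registers illustrative):
    VCVTFXPNTUDQ2PS zmm1 {k1}, zmm2, 0 ; convert uint32 elements of zmm2 to float32 in zmm1, imm8 controls the conversion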
↑ VCVTFXPNTDQ2PS
Convert Fixed Point Int32 Vector to Float32 Vector
Intel reference
VCVTFXPNTDQ2PS zmm1 {k1}, Si32(zmm2/mt), imm8 MVEX.512.0F3A.W0 CB /r ib
Opcode
0xCB
Tested by
t6110
IiyVCVTFXPNTDQ2PS:: PROC
    IiEmitOpcode 0xCB
    JMP IiyVCVTFXPNTUDQ2PS.op:
  ENDP IiyVCVTFXPNTDQ2PS::
↑ VCVTFXPNTPS2UDQ
Convert Float32 Vector to Fixed Point Uint32 Vector
Intel reference
VCVTFXPNTPS2UDQ zmm1 {k1}, Sf32(zmm2/mt), imm8 MVEX.512.66.0F3A.W0 CA /r ib
Opcode
0xCA
Tested by
t6110
IiyVCVTFXPNTPS2UDQ:: PROC
    IiEmitOpcode 0xCA
.op:IiAllowModifier MASK,SAE,EH
    IiOpEn RM
    IiModRM /r
    IiDisp8MVEX Us32
    IiEmitImm Operand3, BYTE, Max=127
    IiDispatchFormat  zmm.zmm.imm, zmm.mem.imm
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix MVEX.512.66.0F3A.W0
    RET
  ENDP IiyVCVTFXPNTPS2UDQ::
↑ VCVTFXPNTPS2DQ
Convert Float32 Vector to Fixed Point Int32 Vector
Intel reference
VCVTFXPNTPS2DQ zmm1 {k1}, Sf32(zmm2/mt), imm8 MVEX.512.66.0F3A.W0 CB /r ib
Opcode
0xCB
Tested by
t6110
IiyVCVTFXPNTPS2DQ:: PROC
    IiEmitOpcode 0xCB
    JMP IiyVCVTFXPNTPS2UDQ.op:
  ENDP IiyVCVTFXPNTPS2DQ::
↑ VCVTFXPNTPD2UDQ
Convert Float64 Vector to Fixed Point Uint32 Vector
Intel reference
VCVTFXPNTPD2UDQ zmm1 {k1}, Sf64(zmm2/mt), imm8 MVEX.512.F2.0F3A.W1 CA /r ib
Opcode
0xCA
Tested by
t6110
IiyVCVTFXPNTPD2UDQ:: PROC
    IiEmitOpcode 0xCA
.op:IiAllowModifier MASK,SAE,EH
    IiOpEn RM
    IiModRM /r
    IiEmitImm Operand3, BYTE, Max=3
    IiDisp8MVEX Ub64
    IiDispatchFormat  zmm.zmm.imm, zmm.mem.imm
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix MVEX.512.F2.0F3A.W1
    RET
  ENDP IiyVCVTFXPNTPD2UDQ::
↑ VCVTFXPNTPD2DQ
Convert Float64 Vector to Fixed Point Int32 Vector
Intel reference
VCVTFXPNTPD2DQ zmm1 {k1}, Sf64(zmm2/mt), imm8 MVEX.512.F2.0F3A.W1 E6 /r ib
Opcode
0xE6
Tested by
t6110
IiyVCVTFXPNTPD2DQ:: PROC
    IiEmitOpcode 0xE6
    JMP IiyVCVTFXPNTPD2UDQ.op:
  ENDP IiyVCVTFXPNTPD2DQ::
↑ VRNDFXPNTPS
Round Float32 Vector
Intel reference
VRNDFXPNTPS zmm1 {k1}, Sf32(zmm2/mt), imm8 MVEX.512.66.0F3A.W0 52 /r ib
Opcode
0x52
IiyVRNDFXPNTPS:: PROC
    IiAllowMaskMerging
    IiEmitOpcode 0x52
    IiOpEn RM
    IiModRM /r
    IiEmitImm Operand3, BYTE
    IiDisp8MVEX Us32
    IiDispatchFormat  zmm.zmm.imm, zmm.mem.imm
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix MVEX.512.66.0F3A.W0
    RET
  ENDP IiyVRNDFXPNTPS::
↑ VRNDFXPNTPD
Round Float64 Vector
Intel reference
VRNDFXPNTPD zmm1 {k1}, Sf64(zmm2/mt), imm8 MVEX.512.66.0F3A.W1 52 /r ib
Opcode
0x52
IiyVRNDFXPNTPD:: PROC
    IiAllowMaskMerging
    IiEmitOpcode 0x52
    IiOpEn RM
    IiModRM /r
    IiEmitImm Operand3, BYTE
    IiDisp8MVEX Ub64
    IiDispatchFormat  zmm.zmm.imm, zmm.mem.imm
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix MVEX.512.66.0F3A.W1
    RET
  ENDP IiyVRNDFXPNTPD::
↑ VPERMF32X4
Shufe Vector Dqwords
Intel reference
VPERMF32X4 zmm1 {k1}, zmm2/mt, imm8 MVEX.512.66.0F3A.W0 07 /r ib
Opcode
0x07
IiyVPERMF32X4:: PROC
    IiAllowMaskMerging
    IiEmitOpcode 0x07
    IiOpEn RM
    IiModRM /r
    IiEmitImm Operand3, BYTE
    IiDisp8MVEX Di64
    IiDispatchFormat  zmm.zmm.imm, zmm.mem.imm
.zmm.zmm.imm:
.zmm.mem.imm:
    IiEmitPrefix MVEX.512.66.0F3A.W0
    RET
  ENDP IiyVPERMF32X4::
↑ VPADCD
Add Int32 Vectors with Carry
Intel reference
VPADCD zmm1 {k1}, k2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 5C /r
Opcode
0x5C
Tested by
t6120
IiyVPADCD:: PROC
    IiEmitOpcode 0x5C
.op:IiAllowMaskMerging
    IiOpEn RVM
    IiModRM /r
    IiDisp8MVEX Si32
    IiDispatchFormat  zmm.krg.zmm, zmm.krg.mem
.zmm.krg.zmm:
.zmm.krg.mem:
    IiEmitPrefix MVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVPADCD::
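A possible source line, following the reference form above where the second source is a mask register (registers illustrative):
    VPADCD zmm1 {k1}, k2, zmm3     ; zmm1 := zmm1 + zmm3 + carry-in bits from k2; carry-out bits are returned in k2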
↑ VPADDSETCD
Add Int32 Vectors and Set Mask to Carry
Intel reference
VPADDSETCD zmm1 {k1}, k2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 5D /r
Opcode
0x5D
Tested by
t6120
IiyVPADDSETCD:: PROC
    IiEmitOpcode 0x5D
    JMP IiyVPADCD.op:
  ENDP IiyVPADDSETCD::
↑ VPSBBD
Subtract Int32 Vectors with Borrow
Intel reference
VPSBBD zmm1 {k1}, k2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 5E /r
Opcode
0x5E
Tested by
t6120
IiyVPSBBD:: PROC
    IiEmitOpcode 0x5E
    JMP IiyVPADCD.op:
  ENDP IiyVPSBBD::
↑ VPSUBRSETBD
Reverse Subtract Int32 Vectors and Set Borrow
Intel reference
VPSUBRSETBD zmm1 {k1}, k2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 6F /r
Opcode
0x6F
Tested by
t6120
IiyVPSUBRSETBD:: PROC
    IiEmitOpcode 0x6F
    JMP IiyVPADCD.op:
  ENDP IiyVPSUBRSETBD::
↑ VPSUBSETBD
Subtract Int32 Vectors and Set Borrow
Intel reference
VPSUBSETBD zmm1 {k1}, k2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 5F /r
Opcode
0x5F
Tested by
t6120
IiyVPSUBSETBD:: PROC
    IiEmitOpcode 0x5F
    JMP IiyVPADCD.op:
  ENDP IiyVPSUBSETBD::
↑ VPSUBRD
Reverse Subtract Int32 Vectors
Intel reference
VPSUBRD zmm1 {k1}, zmm2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 6C /r
Opcode
0x6C
Tested by
t6122
IiyVPSUBRD:: PROC
    IiEmitOpcode 0x6C
.op:IiAllowMaskMerging
    IiOpEn RVM
    IiModRM /r
    IiDisp8MVEX Si32
    IiDispatchFormat  zmm.zmm.zmm, zmm.zmm.mem
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix MVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVPSUBRD::
↑ VPSBBRD
Reverse Subtract Int32 Vectors with Borrow
Intel reference
VPSBBRD zmm1 {k1}, k2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 6E /r
Opcode
0x6E
Tested by
t6122
IiyVPSBBRD:: PROC
    IiEmitOpcode 0x6E
    JMP IiyVPSUBRD.op:
  ENDP IiyVPSBBRD::
↑ VPMULHUD
Multiply Uint32 Vectors And Store High Result
Intel reference
VPMULHUD zmm1 {k1}, zmm2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 86 /r
Opcode
0x86
Tested by
t6122
IiyVPMULHUD:: PROC
    IiEmitOpcode 0x86
    JMP IiyVPSUBRD.op:
  ENDP IiyVPMULHUD::
↑ VPMULHD
Multiply Int32 Vectors And Store High Result
Intel reference
VPMULHD zmm1 {k1}, zmm2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 87 /r
Opcode
0x87
Tested by
t6122
IiyVPMULHD:: PROC
    IiEmitOpcode 0x87
    JMP IiyVPSUBRD.op:
  ENDP IiyVPMULHD::
↑ VFIXUPNANPS
FixUp Special Float32 Vector Numbers With NaN Passthrough
Intel reference
VFIXUPNANPS zmm1 {k1}, zmm2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 55 /r
Opcode
0x55
Tested by
t6122
IiyVFIXUPNANPS:: PROC
    IiAllowSuppressing
    IiEmitOpcode 0x55
    JMP IiyVPSUBRD.op:
  ENDP IiyVFIXUPNANPS::
↑ VPMADD231D
Multiply First Source By Second Source and Add To Destination Int32 Vectors
Intel reference
VPMADD231D zmm1 {k1}, zmm2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 B5 /r
Opcode
0xB5
Tested by
t6122
IiyVPMADD231D:: PROC
    IiEmitOpcode 0xB5
    JMP IiyVPSUBRD.op:
  ENDP IiyVPMADD231D::
↑ VPMADD233D
Multiply First Source By Specially Swizzled Second Source and Add To Second Source Int32 Vectors
Intel reference
VPMADD233D zmm1 {k1}, zmm2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 B4 /r
Opcode
0xB4
Tested by
t6124
IiyVPMADD233D:: PROC
    IiAllowMaskMerging
    IiAllowNoSwizzle
    IiDisp8MVEX Sf32
    IiEmitOpcode 0xB4
    IiOpEn RVM
    IiModRM /r
    IiDispatchFormat  zmm.zmm.zmm, zmm.zmm.mem
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix MVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVPMADD233D::
↑ VSCALEPS
Scale Float32 Vectors
Intel reference
VSCALEPS zmm1 {k1}, zmm2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 84 /r
Opcode
0x84
Tested by
t5710 t6128
IiyVSCALEPS:: PROC
    IiEmitOpcode 0x84
    IiDisp8MVEX Si32
.op:IiAllowModifier MASK,EH,SAE
    IiAllowRounding
    IiOpEn RVM
    IiModRM /r
    IiDispatchFormat  zmm.zmm.zmm, zmm.zmm.mem
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix MVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVSCALEPS::
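A possible source line (registers illustrative):
    VSCALEPS zmm1 {k1}, zmm2, zmm3 ; zmm1 := zmm2 * 2^zmm3, where zmm3 holds int32 exponents, under mask k1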
↑ VADDNPS
Add and Negate Float32 Vectors
Intel reference
VADDNPS zmm1 {k1}, zmm2, Sf32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 50 /r
Opcode
0x50
Tested by
t6128
IiyVADDNPS:: PROC
    IiEmitOpcode 0x50
    IiDisp8MVEX Us32
    JMP IiyVSCALEPS.op:
  ENDP IiyVADDNPS::
↑ VADDNPD
Add and Negate Float64 Vectors
Intel reference
VADDNPD zmm1 {k1}, zmm2, Sf64(zmm3/mt) MVEX.NDS.512.66.0F38.W1 50 /r
Opcode
0x50
Tested by
t6128 t6130
IiyVADDNPD:: PROC
    IiEmitOpcode 0x50
.op:IiDisp8MVEX Ub64
    IiAllowMaskMerging
    IiAllowSuppressing
    IiAllowRounding
    IiOpEn RVM
    IiModRM /r
    IiDispatchFormat  zmm.zmm.zmm, zmm.zmm.mem
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix MVEX.NDS.512.66.0F38.W1
    RET
  ENDP IiyVADDNPD::
↑ VSUBRPD
Reverse Subtract Float64 Vectors
Intel reference
VSUBRPD zmm1 {k1}, zmm2, Sf64(zmm3/mt) MVEX.NDS.512.66.0F38.W1 6D /r
Opcode
0x6D
Tested by
t6130
IiyVSUBRPD:: PROC
    IiEmitOpcode 0x6D
    JMP IiyVADDNPD.op:
  ENDP IiyVSUBRPD::
↑ VPADDSETSD
Add Int32 Vectors and Set Mask to Sign
Intel reference
VPADDSETSD zmm1 {k1}, zmm2, Si32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 CD /r
Opcode
0xCD
Tested by
t6122
IiyVPADDSETSD:: PROC
    IiAllowMaskMerging
    IiEmitOpcode 0xCD
    IiOpEn RVM
    IiModRM /r
    IiDisp8MVEX Si32
    IiDispatchFormat  zmm.zmm.zmm, zmm.zmm.mem
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix MVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVPADDSETSD::
↑ VGMAXABSPS
Absolute Maximum of Float32 Vectors
Intel reference
VGMAXABSPS zmm1 {k1}, zmm2, Sf32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 51 /r
Opcode
0x51
Tested by
t6126
IiyVGMAXABSPS:: PROC
    IiEmitOpcode 0x51
.op:IiAllowMaskMerging
    IiAllowSuppressing
    IiOpEn RVM
    IiModRM /r
    IiDisp8MVEX Us32
    IiDispatchFormat  zmm.zmm.zmm, zmm.zmm.mem
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix MVEX.NDS.512.66.0F38.W0
    RET
  ENDP IiyVGMAXABSPS::
↑ VGMINPS
Minimum of Float32 Vectors
Intel reference
VGMINPS zmm1 {k1}, zmm2, Sf32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 52 /r
Opcode
0x52
Tested by
t6126
IiyVGMINPS:: PROC
    IiEmitOpcode 0x52
    JMP IiyVGMAXABSPS.op:
  ENDP IiyVGMINPS::
↑ VGMAXPS
Maximum of Float32 Vectors
Intel reference
VGMAXPS zmm1 {k1}, zmm2, Sf32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 53 /r
Opcode
0x53
Tested by
t6126
IiyVGMAXPS:: PROC
    IiEmitOpcode 0x53
    JMP IiyVGMAXABSPS.op:
  ENDP IiyVGMAXPS::
↑ VSUBRPS
Reverse Subtract Float32 Vectors
Intel reference
VSUBRPS zmm1 {k1}, zmm2, Sf32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 6D /r
Opcode
0x6D
Tested by
t6124
IiyVSUBRPS:: PROC
    IiEmitOpcode 0x6D
    IiDisp8MVEX Us32
    JMP IiyVSCALEPS.op:
  ENDP IiyVSUBRPS::
↑ VADDSETSPS
Add Float32 Vectors and Set Mask to Sign
Intel reference
VADDSETSPS zmm1 {k1}, zmm2, Sf32(zmm3/mt) MVEX.NDS.512.66.0F38.W0 CC /r
Opcode
0xCC
Tested by
t6124
IiyVADDSETSPS:: PROC
    IiEmitOpcode 0xCC
    IiDisp8MVEX Us32
    JMP IiyVSCALEPS.op:
  ENDP IiyVADDSETSPS::
↑ VGMINPD
Minimum of Float64 Vectors
Intel reference
VGMINPD zmm1 {k1}, zmm2, Sf64(zmm3/mt) MVEX.NDS.512.66.0F38.W1 52 /r
Opcode
0x52
Tested by
t6132
IiyVGMINPD:: PROC
    IiEmitOpcode 0x52
.op:IiAllowMaskMerging
    IiAllowSuppressing
    IiOpEn RVM
    IiModRM /r
    IiDisp8MVEX Ub64
    IiDispatchFormat  zmm.zmm.zmm, zmm.zmm.mem
.zmm.zmm.zmm:
.zmm.zmm.mem:
    IiEmitPrefix MVEX.NDS.512.66.0F38.W1
    RET
  ENDP IiyVGMINPD::
↑ VGMAXPD
Maximum of Float64 Vectors
Intel reference
VGMAXPD zmm1 {k1}, zmm2, Sf64(zmm3/mt) MVEX.NDS.512.66.0F38.W1 53 /r
Opcode
0x53
Tested by
t6132
IiyVGMAXPD:: PROC
    IiEmitOpcode 0x53
    JMP IiyVGMINPD.op:
  ENDP IiyVGMAXPD::
↑ VFIXUPNANPD
FixUp Special Float64 Vector Numbers With NaN Passthrough
Intel reference
VFIXUPNANPD zmm1 {k1}, zmm2, Si64(zmm3/mt) MVEX.NDS.512.66.0F38.W1 55 /r
Opcode
0x55
Tested by
t6132
IiyVFIXUPNANPD:: PROC
    IiEmitOpcode 0x55
    JMP IiyVGMINPD.op:
  ENDP IiyVFIXUPNANPD::
↑ VLOG2PS
Vector Logarithm Base-2 of Float32 Vector
Intel reference
VLOG2PS zmm1 {k1}, zmm2/mt MVEX.512.66.0F38.W0 C9 /r
Opcode
0xC9
Tested by
t6134
IiyVLOG2PS:: PROC
    IiEmitOpcode 0xC9
.op:IiAllowMaskMerging
    IiAllowSuppressing Swizzle=No
    IiOpEn RM
    IiModRM /r
    IiDisp8MVEX Di64
    IiDispatchFormat  zmm.zmm, zmm.mem
.zmm.zmm:
.zmm.mem:
    IiEmitPrefix MVEX.512.66.0F38.W0
    RET
  ENDP IiyVLOG2PS::
↑ VEXP223PS
Base-2 Exponential Calculation of Float32 Vector
Intel reference
VEXP223PS zmm1 {k1}, zmm2/mt MVEX.512.66.0F38.W0 C8 /r
Opcode
0xC8
Tested by
t6134
IiyVEXP223PS:: PROC
    IiEmitOpcode 0xC8
    JMP IiyVLOG2PS.op:
  ENDP IiyVEXP223PS::
↑ VRCP23PS
Reciprocal of Float32 Vector
Intel reference
VRCP23PS zmm1 {k1}, zmm2/mt MVEX.512.66.0F38.W0 CA /r
Opcode
0xCA
Tested by
t6134
IiyVRCP23PS:: PROC
    IiEmitOpcode 0xCA
    JMP IiyVLOG2PS.op:
  ENDP IiyVRCP23PS::
↑ VRSQRT23PS
Vector Reciprocal Square Root of Float32 Vector
Intel reference
VRSQRT23PS zmm1 {k1}, zmm2/mt MVEX.512.66.0F38.W0 CB /r
Opcode
0xCB
Tested by
t6134
IiyVRSQRT23PS:: PROC
    IiEmitOpcode 0xCB
    JMP IiyVLOG2PS.op:
  ENDP IiyVRSQRT23PS::
↑ VPLZCNTD
Count the Number of Leading Zero Bits for Packed Dword Values
Intel reference
VPLZCNTD xmm1 {k1}{z}, xmm2/m128/m32bcst EVEX.128.66.0F38.W0 44 /r
VPLZCNTD ymm1 {k1}{z}, ymm2/m256/m32bcst EVEX.256.66.0F38.W0 44 /r
VPLZCNTD zmm1 {k1}{z}, zmm2/m512/m32bcst EVEX.512.66.0F38.W0 44 /r
Opcode
0x44
Tested by
t5726
IiyVPLZCNTD:: PROC
    IiAllowModifier MASK
    IiAllowBroadcasting DWORD
    IiEmitOpcode 0x44
    IiOpEn RM
    IiModRM /r
    IiDisp8EVEX FV32
    IiDispatchFormat  xmm.xmm, xmm.mem, ymm.ymm, ymm.mem, zmm.zmm, zmm.mem
.xmm.xmm:
.xmm.mem:
    IiEmitPrefix EVEX.128.66.0F38.W0
    RET
.ymm.ymm:
.ymm.mem:
    IiEmitPrefix EVEX.256.66.0F38.W0
    RET
.zmm.zmm:
.zmm.mem:
    IiEmitPrefix EVEX.512.66.0F38.W0
    RET
  ENDP IiyVPLZCNTD::
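A possible source line using zeroing-masking (registers illustrative):
    VPLZCNTD zmm1 {k1}{z}, zmm2    ; per-dword count of leading zero bits, zero-masked by k1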
↑ VPLZCNTQ
Count the Number of Leading Zero Bits for Packed Qword Values
Intel reference
VPLZCNTQ xmm1 {k1}{z}, xmm2/m128/m64bcst EVEX.128.66.0F38.W1 44 /r
VPLZCNTQ ymm1 {k1}{z}, ymm2/m256/m64bcst EVEX.256.66.0F38.W1 44 /r
VPLZCNTQ zmm1 {k1}{z}, zmm2/m512/m64bcst EVEX.512.66.0F38.W1 44 /r
Opcode
0x44
Tested by
t5726
IiyVPLZCNTQ:: PROC
    IiAllowModifier MASK
    IiAllowBroadcasting QWORD
    IiEmitOpcode 0x44
    IiOpEn RM
    IiModRM /r
    IiDisp8EVEX FV64
    IiDispatchFormat  xmm.xmm, xmm.mem, ymm.ymm, ymm.mem, zmm.zmm, zmm.mem
.xmm.xmm:
.xmm.mem:
    IiEmitPrefix EVEX.128.66.0F38.W1
    RET
.ymm.ymm:
.ymm.mem:
    IiEmitPrefix EVEX.256.66.0F38.W1
    RET
.zmm.zmm:
.zmm.mem:
    IiEmitPrefix EVEX.512.66.0F38.W1
    RET
  ENDP IiyVPLZCNTQ::
↑ VPREFETCHNTA
Prefetch memory line using NTA hint
Intel reference
VPREFETCHNTA m8 VEX.128.0F 18 /0
VPREFETCHNTA m8 MVEX.512.0F 18 /0
Category
sse1,fetch
Operands
Mb
Opcode
0x0F18 /0
CPU
P3+
Tested by
t6242
IiyVPREFETCHNTA:: PROC
    MOV EAX,iiPpgModRMd + 0<<28
.rm:IiModRM EAX    
    IiRequire AVX512, MVEX
    IiEmitOpcode 0x18
    IiOpEn M
    IiDisp8MVEX Di64
    IiDispatchFormat  mem
.mem:
    IiEmitPrefix VEX.128.0F, MVEX.512.0F
    RET
  ENDP IiyVPREFETCHNTA::
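A possible source line (the address expression is illustrative):
    VPREFETCHNTA [esi]             ; prefetch the cache line at [esi] with the non-temporal (NTA) hint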
↑ VPREFETCH0
Prefetch memory line using T0 hint
Intel reference
VPREFETCH0 m8 VEX.128.0F 18 /1
VPREFETCH0 m8 MVEX.512.0F 18 /1
Opcode
0x18
Tested by
t6240
IiyVPREFETCH0:: PROC
    MOV EAX,iiPpgModRMd + 1<<28
    JMP IiyVPREFETCHNTA.rm:
  ENDP IiyVPREFETCH0::
↑ VPREFETCH1
Prefetch memory line using T1 hint
Intel reference
VPREFETCH1 m8 VEX.128.0F 18 /2
VPREFETCH1 m8 MVEX.512.0F 18 /2
Opcode
0x18
Tested by
t6240
IiyVPREFETCH1:: PROC
    MOV EAX,iiPpgModRMd + 2<<28
    JMP IiyVPREFETCHNTA.rm:
  ENDP IiyVPREFETCH1::
↑ VPREFETCH2
Prefetch memory line using T2 hint
Intel reference
VPREFETCH2 m8 VEX.128.0F 18 /3
VPREFETCH2 m8 MVEX.512.0F 18 /3
Opcode
0x18
Tested by
t6240
IiyVPREFETCH2:: PROC
    MOV EAX,iiPpgModRMd + 3<<28
    JMP IiyVPREFETCHNTA.rm:
  ENDP IiyVPREFETCH2::
↑ VPREFETCHENTA
Prefetch memory line using NTA hint, with intent to write
Intel reference
VPREFETCHENTA m8 VEX.128.0F 18 /4
VPREFETCHENTA m8 MVEX.512.0F 18 /4
Opcode
0x18
Tested by
t6242
IiyVPREFETCHENTA:: PROC
    MOV EAX,iiPpgModRMd + 4<<28
    JMP IiyVPREFETCHNTA.rm:
  ENDP IiyVPREFETCHENTA::
↑ VPREFETCHE0
Prefetch memory line using T0 hint, with intent to write
Intel reference
VPREFETCHE0 m8 VEX.128.0F 18 /5
VPREFETCHE0 m8 MVEX.512.0F 18 /5
Opcode
0x18
Tested by
t6240
IiyVPREFETCHE0:: PROC
    MOV EAX,iiPpgModRMd + 5<<28
    JMP IiyVPREFETCHNTA.rm:
  ENDP IiyVPREFETCHE0::
↑ VPREFETCHE1
Prefetch memory line using T1 hint, with intent to write
Intel reference
VPREFETCHE1 m8 VEX.128.0F 18 /6
VPREFETCHE1 m8 MVEX.512.0F 18 /6
Opcode
0x18
Tested by
t6240
IiyVPREFETCHE1:: PROC
    MOV EAX,iiPpgModRMd + 6<<28
    JMP IiyVPREFETCHNTA.rm:
  ENDP IiyVPREFETCHE1::
↑ VPREFETCHE2
Prefetch memory line using T2 hint, with intent to write
Intel reference
VPREFETCHE2 m8 VEX.128.0F 18 /7
VPREFETCHE2 m8 MVEX.512.0F 18 /7
Opcode
0x18
Tested by
t6240
IiyVPREFETCHE2:: PROC
    MOV EAX,iiPpgModRMd + 7<<28
    JMP IiyVPREFETCHNTA.rm:
  ENDP IiyVPREFETCHE2::
↑ CLEVICT0
Evict L1 line
Intel reference
CLEVICT0 m8 VEX.128.F2.0F AE /7
CLEVICT0 m8 MVEX.512.F2.0F AE /7
Opcode
0xAE
Tested by
t6242
IiyCLEVICT0:: PROC
    IiEmitPrefix VEX.128.F2.0F, MVEX.512.F2.0F
.pf:IiOpEn M
    IiModRM /7
    IiEmitOpcode 0xAE
    IiDisp8MVEX Di64
    IiDispatchFormat  mem
.mem:RET
  ENDP IiyCLEVICT0::
↑ CLEVICT1
Evict L2 line
Intel reference
CLEVICT1 m8 VEX.128.F3.0F AE /7
CLEVICT1 m8 MVEX.512.F3.0F AE /7
Opcode
0xAE
Tested by
t6242
IiyCLEVICT1:: PROC
    IiEmitPrefix VEX.128.F3.0F, MVEX.512.F3.0F
    JMP IiyCLEVICT0.pf:
  ENDP IiyCLEVICT1::
↑ DELAY
Stall Thread
Intel reference
DELAY r32 VEX.128.F3.0F.W0 AE /6
DELAY r64 VEX.128.F3.0F.W1 AE /6
Opcode
0xAE
IiyDELAY:: PROC
     IiRequire AVX512, MVEX
     IiOpEn M
     IiModRM /6
     IiEmitOpcode 0xAE
     IiDispatchFormat  r32, r64
.r32:IiEmitPrefix VEX.128.F3.0F.W0
     RET
.r64:IiEmitPrefix VEX.128.F3.0F.W1
     RET
  ENDP IiyDELAY::
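Possible source lines for the two register widths listed above:
    DELAY eax                      ; 32-bit clock-count operand (W0 encoding)
    DELAY rax                      ; 64-bit operand (W1 encoding)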
↑ SPFLT
Set performance monitor filtering mask
Intel reference
SPFLT r32 VEX.128.F2.0F.W0 AE /6
SPFLT r64 VEX.128.F2.0F.W1 AE /6
Opcode
0xAE
IiySPFLT:: PROC
     IiRequire AVX512, MVEX
     IiOpEn M
     IiEmitOpcode 0xAE
     IiModRM /6
     IiDispatchFormat  r32, r64
.r32:IiEmitPrefix VEX.128.F2.0F.W0
     RET
.r64:IiEmitPrefix VEX.128.F2.0F.W1
     RET
  ENDP IiySPFLT::
↑ TZCNTI
Initialized Trailing Zero Count
Intel reference
TZCNTI r32, r32 VEX.128.F2.0F.W0 BC /r
TZCNTI r64, r64 VEX.128.F2.0F.W1 BC /r
Opcode
0xBC
Tested by
t6246
IiyTZCNTI:: PROC
    IiRequire AVX512, MVEX
    IiOpEn RM
    IiEmitOpcode 0xBC
    IiModRM /r
    IiDispatchFormat  r32.r32, r64.r64
.r32.r32:
    IiEmitPrefix VEX.128.F2.0F.W0
    RET
.r64.r64:
    IiEmitPrefix VEX.128.F2.0F.W1
    RET
  ENDP IiyTZCNTI::
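Possible source lines for the two register widths listed above:
    TZCNTI eax, ebx                ; 32-bit form (W0 encoding)
    TZCNTI rax, rbx                ; 64-bit form (W1 encoding)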
  ENDPROGRAM iiy
