.. |copyright-date| replace:: 2019-2021
.. |release| replace:: 2021Q2
.. |date-of-issue| replace:: 02 July 2021
.. |footer| replace:: Copyright © |copyright-date|, Arm Limited and its
affiliates. All rights reserved.
==================
Arm MVE Intrinsics
==================
.. class:: logo
.. image:: Arm_logo_blue_RGB.svg
:scale: 30%
.. class:: version
|release|
.. class:: issued
Date of Issue: |date-of-issue|
.. section-numbering::
.. raw:: pdf
PageBreak oneColumn
.. contents:: Table of Contents
:depth: 4
Preface
#######
Abstract
========
This document is complementary to the main Arm C Language Extensions
(ACLE) specification, which can be found on the `ACLE project on
GitHub <https://github.com/ARM-software/acle>`_.
Latest release and defects report
=================================
For the latest release of this document, see the `ACLE project on
GitHub <https://github.com/ARM-software/acle>`_.
Please report defects in this specification to the `issue tracker page
on GitHub <https://github.com/ARM-software/acle/issues>`_.
License
=======
This work is licensed under the Creative Commons
Attribution-ShareAlike 4.0 International License. To view a copy of
this license, visit http://creativecommons.org/licenses/by-sa/4.0/ or
send a letter to Creative Commons, PO Box 1866, Mountain View, CA
94042, USA.
Grant of Patent License. Subject to the terms and conditions of this
license (both the Public License and this Patent License), each
Licensor hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this
section) patent license to make, have made, use, offer to sell, sell,
import, and otherwise transfer the Licensed Material, where such
license applies only to those patent claims licensable by such
Licensor that are necessarily infringed by their contribution(s) alone
or by combination of their contribution(s) with the Licensed Material
to which such contribution(s) was submitted. If You institute patent
litigation against any entity (including a cross-claim or counterclaim
in a lawsuit) alleging that the Licensed Material or a contribution
incorporated within the Licensed Material constitutes direct or
contributory patent infringement, then any licenses granted to You
under this license for that Licensed Material shall terminate as of
the date such litigation is filed.
About the license
=================
As identified more fully in the License_ section, this project
is licensed under CC-BY-SA-4.0 along with an additional patent
license. The language in the additional patent license is largely
identical to that in Apache-2.0 (specifically, Section 3 of Apache-2.0
as reflected at https://www.apache.org/licenses/LICENSE-2.0) with two
exceptions.
First, several changes were made related to the defined terms so as to
reflect the fact that such defined terms need to align with the
terminology in CC-BY-SA-4.0 rather than Apache-2.0 (e.g., changing
“Work” to “Licensed Material”).
Second, the defensive termination clause was changed such that the
scope of defensive termination applies to “any licenses granted to
You” (rather than “any patent licenses granted to You”). This change
is intended to help maintain a healthy ecosystem by providing
additional protection to the community against patent litigation
claims.
Contributions
=============
Contributions to this project are licensed under an inbound=outbound
model such that any such contributions are licensed by the contributor
under the same terms as those in the LICENSE file.
Trademark notice
================
The text of and illustrations in this document are licensed by Arm
under a Creative Commons Attribution-ShareAlike 4.0 International
license ("CC-BY-SA-4.0"), with an additional clause on patents.
The Arm trademarks featured here are registered trademarks or
trademarks of Arm Limited (or its subsidiaries) in the US and/or
elsewhere. All rights reserved. Please visit
https://www.arm.com/company/policies/trademarks for more information
about Arm’s trademarks.
Copyright
=========
Copyright (c) |copyright-date|, Arm Limited and its affiliates. All rights
reserved.
Document history
================
+-----------+-----------------+---------------------+
|Issue |Date |Change |
+-----------+-----------------+---------------------+
|Q219-00 |30 June 2019 |Version ACLE Q2 2019 |
+-----------+-----------------+---------------------+
|Q319-00 |30 September 2019|Version ACLE Q3 2019 |
+-----------+-----------------+---------------------+
|Q419-00 |31 December 2019 |Version ACLE Q4 2019 |
+-----------+-----------------+---------------------+
|Q220-00 |30 May 2020 |Version ACLE Q2 2020 |
+-----------+-----------------+---------------------+
| |release| | |date-of-issue| |Open source release. |
+-----------+-----------------+---------------------+
List of Intrinsics
##################
Vector manipulation
===================
Create vector
~~~~~~~~~~~~~
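As a plain-C illustration of the lane semantics listed in the table below, the following sketch models ``vcreateq_u32`` and ``vidupq[_n]_u32`` with scalar code. This is an assumption-laden model for exposition only, not a real implementation: actual MVE code includes ``arm_mve.h`` and calls the intrinsics directly on an MVE-enabled core.

.. code:: c

   #include <stdint.h>

   /* Scalar model of vcreateq_u32: lanes 0-1 come from the low and
    * high halves of a, lanes 2-3 from the low and high halves of b,
    * mirroring the VMOV Qd[0..3], Rt0..Rt3 sequence in the table.
    * Illustrative only; not the real intrinsic. */
   static void vcreateq_u32_model(uint64_t a, uint64_t b, uint32_t out[4])
   {
       out[0] = (uint32_t)a;
       out[1] = (uint32_t)(a >> 32);
       out[2] = (uint32_t)b;
       out[3] = (uint32_t)(b >> 32);
   }

   /* Scalar model of vidupq[_n]_u32 (VIDUP.U32 Qd, Rn, imm): lane i
    * holds a + i*imm, where imm must be 1, 2, 4, or 8. Illustrative
    * only; the writeback (_wb) forms additionally store the next
    * start value back through the pointer argument. */
   static void vidupq_n_u32_model(uint32_t a, int imm, uint32_t out[4])
   {
       for (int i = 0; i < 4; i++)
           out[i] = a + (uint32_t)(i * imm);
   }

The predicated ``_m`` and ``_x`` forms in the table follow the same lane arithmetic, but only write lanes whose predicate bits in ``p`` are set (``_m`` takes inactive lanes from ``inactive``; ``_x`` leaves them unspecified).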
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==================================================+========================+=================================+===================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcreateq_f16( | a -> [Rt0, Rt1] | VMOV Qd[0], Rt0 | Qd -> result | |
| uint64_t a, | b -> [Rt2, Rt3] | VMOV Qd[1], Rt1 | | |
| uint64_t b) | | VMOV Qd[2], Rt2 | | |
| | | VMOV Qd[3], Rt3 | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcreateq_f32( | a -> [Rt0, Rt1] | VMOV Qd[0], Rt0 | Qd -> result | |
| uint64_t a, | b -> [Rt2, Rt3] | VMOV Qd[1], Rt1 | | |
| uint64_t b) | | VMOV Qd[2], Rt2 | | |
| | | VMOV Qd[3], Rt3 | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vcreateq_s8( | a -> [Rt0, Rt1] | VMOV Qd[0], Rt0 | Qd -> result | |
| uint64_t a, | b -> [Rt2, Rt3] | VMOV Qd[1], Rt1 | | |
| uint64_t b) | | VMOV Qd[2], Rt2 | | |
| | | VMOV Qd[3], Rt3 | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcreateq_s16( | a -> [Rt0, Rt1] | VMOV Qd[0], Rt0 | Qd -> result | |
| uint64_t a, | b -> [Rt2, Rt3] | VMOV Qd[1], Rt1 | | |
| uint64_t b) | | VMOV Qd[2], Rt2 | | |
| | | VMOV Qd[3], Rt3 | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcreateq_s32( | a -> [Rt0, Rt1] | VMOV Qd[0], Rt0 | Qd -> result | |
| uint64_t a, | b -> [Rt2, Rt3] | VMOV Qd[1], Rt1 | | |
| uint64_t b) | | VMOV Qd[2], Rt2 | | |
| | | VMOV Qd[3], Rt3 | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vcreateq_s64( | a -> [Rt0, Rt1] | VMOV Qd[0], Rt0 | Qd -> result | |
| uint64_t a, | b -> [Rt2, Rt3] | VMOV Qd[1], Rt1 | | |
| uint64_t b) | | VMOV Qd[2], Rt2 | | |
| | | VMOV Qd[3], Rt3 | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vcreateq_u8( | a -> [Rt0, Rt1] | VMOV Qd[0], Rt0 | Qd -> result | |
| uint64_t a, | b -> [Rt2, Rt3] | VMOV Qd[1], Rt1 | | |
| uint64_t b) | | VMOV Qd[2], Rt2 | | |
| | | VMOV Qd[3], Rt3 | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcreateq_u16( | a -> [Rt0, Rt1] | VMOV Qd[0], Rt0 | Qd -> result | |
| uint64_t a, | b -> [Rt2, Rt3] | VMOV Qd[1], Rt1 | | |
| uint64_t b) | | VMOV Qd[2], Rt2 | | |
| | | VMOV Qd[3], Rt3 | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcreateq_u32( | a -> [Rt0, Rt1] | VMOV Qd[0], Rt0 | Qd -> result | |
| uint64_t a, | b -> [Rt2, Rt3] | VMOV Qd[1], Rt1 | | |
| uint64_t b) | | VMOV Qd[2], Rt2 | | |
| | | VMOV Qd[3], Rt3 | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vcreateq_u64( | a -> [Rt0, Rt1] | VMOV Qd[0], Rt0 | Qd -> result | |
| uint64_t a, | b -> [Rt2, Rt3] | VMOV Qd[1], Rt1 | | |
| uint64_t b) | | VMOV Qd[2], Rt2 | | |
| | | VMOV Qd[3], Rt3 | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vddupq[_n]_u8( | a -> Rn | VDDUP.U8 Qd, Rn, imm | Qd -> result | |
| uint32_t a, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vddupq[_n]_u16( | a -> Rn | VDDUP.U16 Qd, Rn, imm | Qd -> result | |
| uint32_t a, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vddupq[_n]_u32( | a -> Rn | VDDUP.U32 Qd, Rn, imm | Qd -> result | |
| uint32_t a, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vddupq[_wb]_u8( | *a -> Rn | VDDUP.U8 Qd, Rn, imm | Qd -> result | |
| uint32_t *a, | imm in [1,2,4,8] | | Rn -> *a | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vddupq[_wb]_u16( | *a -> Rn | VDDUP.U16 Qd, Rn, imm | Qd -> result | |
| uint32_t *a, | imm in [1,2,4,8] | | Rn -> *a | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vddupq[_wb]_u32( | *a -> Rn | VDDUP.U32 Qd, Rn, imm | Qd -> result | |
| uint32_t *a, | imm in [1,2,4,8] | | Rn -> *a | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vddupq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Rn | VPST | | |
| uint32_t a, | imm in [1,2,4,8] | VDDUPT.U8 Qd, Rn, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vddupq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Rn | VPST | | |
| uint32_t a, | imm in [1,2,4,8] | VDDUPT.U16 Qd, Rn, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vddupq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Rn | VPST | | |
| uint32_t a, | imm in [1,2,4,8] | VDDUPT.U32 Qd, Rn, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vddupq_m[_wb_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | *a -> Rn | VPST | Rn -> *a | |
| uint32_t *a, | imm in [1,2,4,8] | VDDUPT.U8 Qd, Rn, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vddupq_m[_wb_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | *a -> Rn | VPST | Rn -> *a | |
| uint32_t *a, | imm in [1,2,4,8] | VDDUPT.U16 Qd, Rn, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vddupq_m[_wb_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | *a -> Rn | VPST | Rn -> *a | |
| uint32_t *a, | imm in [1,2,4,8] | VDDUPT.U32 Qd, Rn, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vddupq_x[_n]_u8( | a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t a, | imm in [1,2,4,8] | VPST | | |
| const int imm, | p -> Rp | VDDUPT.U8 Qd, Rn, imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vddupq_x[_n]_u16( | a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t a, | imm in [1,2,4,8] | VPST | | |
| const int imm, | p -> Rp | VDDUPT.U16 Qd, Rn, imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vddupq_x[_n]_u32( | a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t a, | imm in [1,2,4,8] | VPST | | |
| const int imm, | p -> Rp | VDDUPT.U32 Qd, Rn, imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vddupq_x[_wb]_u8( | *a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t *a, | imm in [1,2,4,8] | VPST | Rn -> *a | |
| const int imm, | p -> Rp | VDDUPT.U8 Qd, Rn, imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vddupq_x[_wb]_u16( | *a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t *a, | imm in [1,2,4,8] | VPST | Rn -> *a | |
| const int imm, | p -> Rp | VDDUPT.U16 Qd, Rn, imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vddupq_x[_wb]_u32( | *a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t *a, | imm in [1,2,4,8] | VPST | Rn -> *a | |
| const int imm, | p -> Rp | VDDUPT.U32 Qd, Rn, imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vdwdupq[_n]_u8( | a -> Rn | VDWDUP.U8 Qd, Rn, Rm, imm | Qd -> result | |
| uint32_t a, | b -> Rm | | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vdwdupq[_n]_u16( | a -> Rn | VDWDUP.U16 Qd, Rn, Rm, imm | Qd -> result | |
| uint32_t a, | b -> Rm | | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vdwdupq[_n]_u32( | a -> Rn | VDWDUP.U32 Qd, Rn, Rm, imm | Qd -> result | |
| uint32_t a, | b -> Rm | | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vdwdupq[_wb]_u8( | *a -> Rn | VDWDUP.U8 Qd, Rn, Rm, imm | Qd -> result | |
| uint32_t *a, | b -> Rm | | Rn -> *a | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vdwdupq[_wb]_u16( | *a -> Rn | VDWDUP.U16 Qd, Rn, Rm, imm | Qd -> result | |
| uint32_t *a, | b -> Rm | | Rn -> *a | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vdwdupq[_wb]_u32( | *a -> Rn | VDWDUP.U32 Qd, Rn, Rm, imm | Qd -> result | |
| uint32_t *a, | b -> Rm | | Rn -> *a | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vdwdupq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Rn | VPST | | |
| uint32_t a, | b -> Rm | VDWDUPT.U8 Qd, Rn, Rm, imm | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vdwdupq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Rn | VPST | | |
| uint32_t a, | b -> Rm | VDWDUPT.U16 Qd, Rn, Rm, imm | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vdwdupq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Rn | VPST | | |
| uint32_t a, | b -> Rm | VDWDUPT.U32 Qd, Rn, Rm, imm | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vdwdupq_m[_wb_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | *a -> Rn | VPST | Rn -> *a | |
| uint32_t *a, | b -> Rm | VDWDUPT.U8 Qd, Rn, Rm, imm | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vdwdupq_m[_wb_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | *a -> Rn | VPST | Rn -> *a | |
| uint32_t *a, | b -> Rm | VDWDUPT.U16 Qd, Rn, Rm, imm | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vdwdupq_m[_wb_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | *a -> Rn | VPST | Rn -> *a | |
| uint32_t *a, | b -> Rm | VDWDUPT.U32 Qd, Rn, Rm, imm | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vdwdupq_x[_n]_u8( | a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t a, | b -> Rm | VPST | | |
| uint32_t b, | imm in [1,2,4,8] | VDWDUPT.U8 Qd, Rn, Rm, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vdwdupq_x[_n]_u16( | a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t a, | b -> Rm | VPST | | |
| uint32_t b, | imm in [1,2,4,8] | VDWDUPT.U16 Qd, Rn, Rm, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vdwdupq_x[_n]_u32( | a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t a, | b -> Rm | VPST | | |
| uint32_t b, | imm in [1,2,4,8] | VDWDUPT.U32 Qd, Rn, Rm, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vdwdupq_x[_wb]_u8( | *a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t *a, | b -> Rm | VPST | Rn -> *a | |
| uint32_t b, | imm in [1,2,4,8] | VDWDUPT.U8 Qd, Rn, Rm, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vdwdupq_x[_wb]_u16( | *a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t *a, | b -> Rm | VPST | Rn -> *a | |
| uint32_t b, | imm in [1,2,4,8] | VDWDUPT.U16 Qd, Rn, Rm, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vdwdupq_x[_wb]_u32( | *a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t *a, | b -> Rm | VPST | Rn -> *a | |
| uint32_t b, | imm in [1,2,4,8] | VDWDUPT.U32 Qd, Rn, Rm, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vidupq[_n]_u8( | a -> Rn | VIDUP.U8 Qd, Rn, imm | Qd -> result | |
| uint32_t a, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vidupq[_n]_u16( | a -> Rn | VIDUP.U16 Qd, Rn, imm | Qd -> result | |
| uint32_t a, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vidupq[_n]_u32( | a -> Rn | VIDUP.U32 Qd, Rn, imm | Qd -> result | |
| uint32_t a, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vidupq[_wb]_u8( | *a -> Rn | VIDUP.U8 Qd, Rn, imm | Qd -> result | |
| uint32_t *a, | imm in [1,2,4,8] | | Rn -> *a | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vidupq[_wb]_u16( | *a -> Rn | VIDUP.U16 Qd, Rn, imm | Qd -> result | |
| uint32_t *a, | imm in [1,2,4,8] | | Rn -> *a | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vidupq[_wb]_u32( | *a -> Rn | VIDUP.U32 Qd, Rn, imm | Qd -> result | |
| uint32_t *a, | imm in [1,2,4,8] | | Rn -> *a | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vidupq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Rn | VPST | | |
| uint32_t a, | imm in [1,2,4,8] | VIDUPT.U8 Qd, Rn, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vidupq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Rn | VPST | | |
| uint32_t a, | imm in [1,2,4,8] | VIDUPT.U16 Qd, Rn, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vidupq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Rn | VPST | | |
| uint32_t a, | imm in [1,2,4,8] | VIDUPT.U32 Qd, Rn, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vidupq_m[_wb_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | *a -> Rn | VPST | Rn -> *a | |
| uint32_t *a, | imm in [1,2,4,8] | VIDUPT.U8 Qd, Rn, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vidupq_m[_wb_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | *a -> Rn | VPST | Rn -> *a | |
| uint32_t *a, | imm in [1,2,4,8] | VIDUPT.U16 Qd, Rn, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vidupq_m[_wb_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | *a -> Rn | VPST | Rn -> *a | |
| uint32_t *a, | imm in [1,2,4,8] | VIDUPT.U32 Qd, Rn, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vidupq_x[_n]_u8( | a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t a, | imm in [1,2,4,8] | VPST | | |
| const int imm, | p -> Rp | VIDUPT.U8 Qd, Rn, imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vidupq_x[_n]_u16( | a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t a, | imm in [1,2,4,8] | VPST | | |
| const int imm, | p -> Rp | VIDUPT.U16 Qd, Rn, imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vidupq_x[_n]_u32( | a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t a, | imm in [1,2,4,8] | VPST | | |
| const int imm, | p -> Rp | VIDUPT.U32 Qd, Rn, imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vidupq_x[_wb]_u8( | *a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t *a, | imm in [1,2,4,8] | VPST | Rn -> *a | |
| const int imm, | p -> Rp | VIDUPT.U8 Qd, Rn, imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vidupq_x[_wb]_u16( | *a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t *a, | imm in [1,2,4,8] | VPST | Rn -> *a | |
| const int imm, | p -> Rp | VIDUPT.U16 Qd, Rn, imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vidupq_x[_wb]_u32( | *a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t *a, | imm in [1,2,4,8] | VPST | Rn -> *a | |
| const int imm, | p -> Rp | VIDUPT.U32 Qd, Rn, imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]viwdupq[_n]_u8( | a -> Rn | VIWDUP.U8 Qd, Rn, Rm, imm | Qd -> result | |
| uint32_t a, | b -> Rm | | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]viwdupq[_n]_u16( | a -> Rn | VIWDUP.U16 Qd, Rn, Rm, imm | Qd -> result | |
| uint32_t a, | b -> Rm | | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]viwdupq[_n]_u32( | a -> Rn | VIWDUP.U32 Qd, Rn, Rm, imm | Qd -> result | |
| uint32_t a, | b -> Rm | | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]viwdupq[_wb]_u8( | *a -> Rn | VIWDUP.U8 Qd, Rn, Rm, imm | Qd -> result | |
| uint32_t *a, | b -> Rm | | Rn -> *a | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]viwdupq[_wb]_u16( | *a -> Rn | VIWDUP.U16 Qd, Rn, Rm, imm | Qd -> result | |
| uint32_t *a, | b -> Rm | | Rn -> *a | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]viwdupq[_wb]_u32( | *a -> Rn | VIWDUP.U32 Qd, Rn, Rm, imm | Qd -> result | |
| uint32_t *a, | b -> Rm | | Rn -> *a | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]viwdupq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Rn | VPST | | |
| uint32_t a, | b -> Rm | VIWDUPT.U8 Qd, Rn, Rm, imm | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]viwdupq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Rn | VPST | | |
| uint32_t a, | b -> Rm | VIWDUPT.U16 Qd, Rn, Rm, imm | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]viwdupq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Rn | VPST | | |
| uint32_t a, | b -> Rm | VIWDUPT.U32 Qd, Rn, Rm, imm | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]viwdupq_m[_wb_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | *a -> Rn | VPST | Rn -> *a | |
| uint32_t *a, | b -> Rm | VIWDUPT.U8 Qd, Rn, Rm, imm | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]viwdupq_m[_wb_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | *a -> Rn | VPST | Rn -> *a | |
| uint32_t *a, | b -> Rm | VIWDUPT.U16 Qd, Rn, Rm, imm | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]viwdupq_m[_wb_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | *a -> Rn | VPST | Rn -> *a | |
| uint32_t *a, | b -> Rm | VIWDUPT.U32 Qd, Rn, Rm, imm | | |
| uint32_t b, | imm in [1,2,4,8] | | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]viwdupq_x[_n]_u8( | a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t a, | b -> Rm | VPST | | |
| uint32_t b, | imm in [1,2,4,8] | VIWDUPT.U8 Qd, Rn, Rm, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]viwdupq_x[_n]_u16( | a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t a, | b -> Rm | VPST | | |
| uint32_t b, | imm in [1,2,4,8] | VIWDUPT.U16 Qd, Rn, Rm, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]viwdupq_x[_n]_u32( | a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t a, | b -> Rm | VPST | | |
| uint32_t b, | imm in [1,2,4,8] | VIWDUPT.U32 Qd, Rn, Rm, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]viwdupq_x[_wb]_u8( | *a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t *a, | b -> Rm | VPST | Rn -> *a | |
| uint32_t b, | imm in [1,2,4,8] | VIWDUPT.U8 Qd, Rn, Rm, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]viwdupq_x[_wb]_u16( | *a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t *a, | b -> Rm | VPST | Rn -> *a | |
| uint32_t b, | imm in [1,2,4,8] | VIWDUPT.U16 Qd, Rn, Rm, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]viwdupq_x[_wb]_u32( | *a -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t *a, | b -> Rm | VPST | Rn -> *a | |
| uint32_t b, | imm in [1,2,4,8] | VIWDUPT.U32 Qd, Rn, Rm, imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vdupq_n_s8(int8_t a) | a -> Rt | VDUP.8 Qd, Rt | Qd -> result | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vdupq_n_s16(int16_t a) | a -> Rt | VDUP.16 Qd, Rt | Qd -> result | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vdupq_n_s32(int32_t a) | a -> Rt | VDUP.32 Qd, Rt | Qd -> result | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vdupq_n_u8(uint8_t a) | a -> Rt | VDUP.8 Qd, Rt | Qd -> result | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vdupq_n_u16(uint16_t a) | a -> Rt | VDUP.16 Qd, Rt | Qd -> result | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vdupq_n_u32(uint32_t a) | a -> Rt | VDUP.32 Qd, Rt | Qd -> result | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vdupq_n_f16(float16_t a) | a -> Rt | VDUP.16 Qd, Rt | Qd -> result | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vdupq_n_f32(float32_t a) | a -> Rt | VDUP.32 Qd, Rt | Qd -> result | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vdupq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Rt | VPST | | |
| int8_t a, | p -> Rp | VDUPT.8 Qd, Rt | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vdupq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Rt | VPST | | |
| int16_t a, | p -> Rp | VDUPT.16 Qd, Rt | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vdupq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Rt | VPST | | |
| int32_t a, | p -> Rp | VDUPT.32 Qd, Rt | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vdupq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Rt | VPST | | |
| uint8_t a, | p -> Rp | VDUPT.8 Qd, Rt | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vdupq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Rt | VPST | | |
| uint16_t a, | p -> Rp | VDUPT.16 Qd, Rt | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vdupq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Rt | VPST | | |
| uint32_t a, | p -> Rp | VDUPT.32 Qd, Rt | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vdupq_m[_n_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Rt | VPST | | |
| float16_t a, | p -> Rp | VDUPT.16 Qd, Rt | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vdupq_m[_n_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Rt | VPST | | |
| float32_t a, | p -> Rp | VDUPT.32 Qd, Rt | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vdupq_x_n_s8( | a -> Rt | VMSR P0, Rp | Qd -> result | |
| int8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VDUPT.8 Qd, Rt | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vdupq_x_n_s16( | a -> Rt | VMSR P0, Rp | Qd -> result | |
| int16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VDUPT.16 Qd, Rt | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vdupq_x_n_s32( | a -> Rt | VMSR P0, Rp | Qd -> result | |
| int32_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VDUPT.32 Qd, Rt | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vdupq_x_n_u8( | a -> Rt | VMSR P0, Rp | Qd -> result | |
| uint8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VDUPT.8 Qd, Rt | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vdupq_x_n_u16( | a -> Rt | VMSR P0, Rp | Qd -> result | |
| uint16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VDUPT.16 Qd, Rt | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vdupq_x_n_u32( | a -> Rt | VMSR P0, Rp | Qd -> result | |
| uint32_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VDUPT.32 Qd, Rt | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vdupq_x_n_f16( | a -> Rt | VMSR P0, Rp | Qd -> result | |
| float16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VDUPT.16 Qd, Rt | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vdupq_x_n_f32( | a -> Rt | VMSR P0, Rp | Qd -> result | |
| float32_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VDUPT.32 Qd, Rt | | |
+--------------------------------------------------+------------------------+---------------------------------+-------------------+---------------------------+

Reverse elements
~~~~~~~~~~~~~~~~

+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+======================================================+========================+=======================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vrev16q[_s8](int8x16_t a) | a -> Qm | VREV16.8 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vrev16q[_u8](uint8x16_t a) | a -> Qm | VREV16.8 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrev16q_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | p -> Rp | VREV16T.8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrev16q_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | p -> Rp | VREV16T.8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrev16q_x[_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV16T.8 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrev16q_x[_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV16T.8 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vrev32q[_s8](int8x16_t a) | a -> Qm | VREV32.8 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vrev32q[_s16](int16x8_t a) | a -> Qm | VREV32.16 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vrev32q[_u8](uint8x16_t a) | a -> Qm | VREV32.8 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vrev32q[_u16](uint16x8_t a) | a -> Qm | VREV32.16 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vrev32q[_f16](float16x8_t a) | a -> Qm | VREV32.16 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrev32q_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | p -> Rp | VREV32T.8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrev32q_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | p -> Rp | VREV32T.16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrev32q_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | p -> Rp | VREV32T.8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrev32q_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | p -> Rp | VREV32T.16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrev32q_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VREV32T.16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrev32q_x[_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV32T.8 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrev32q_x[_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV32T.16 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrev32q_x[_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV32T.8 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrev32q_x[_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV32T.16 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrev32q_x[_f16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV32T.16 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vrev64q[_s8](int8x16_t a) | a -> Qm | VREV64.8 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vrev64q[_s16](int16x8_t a) | a -> Qm | VREV64.16 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vrev64q[_s32](int32x4_t a) | a -> Qm | VREV64.32 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vrev64q[_u8](uint8x16_t a) | a -> Qm | VREV64.8 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vrev64q[_u16](uint16x8_t a) | a -> Qm | VREV64.16 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vrev64q[_u32](uint32x4_t a) | a -> Qm | VREV64.32 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vrev64q[_f16](float16x8_t a) | a -> Qm | VREV64.16 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vrev64q[_f32](float32x4_t a) | a -> Qm | VREV64.32 Qd, Qm | Qd -> result | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrev64q_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | p -> Rp | VREV64T.8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrev64q_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | p -> Rp | VREV64T.16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vrev64q_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | p -> Rp | VREV64T.32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrev64q_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | p -> Rp | VREV64T.8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrev64q_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | p -> Rp | VREV64T.16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vrev64q_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint32x4_t a, | p -> Rp | VREV64T.32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrev64q_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VREV64T.16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrev64q_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VREV64T.32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrev64q_x[_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV64T.8 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrev64q_x[_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV64T.16 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vrev64q_x[_s32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV64T.32 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrev64q_x[_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV64T.8 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrev64q_x[_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV64T.16 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vrev64q_x[_u32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV64T.32 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrev64q_x[_f16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV64T.16 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrev64q_x[_f32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VREV64T.32 Qd, Qm | | |
+------------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
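The merging (``_m``) forms above take unselected lanes from ``inactive``, while the ``_x`` forms leave them unspecified. As an illustration only, the ``_m`` behavior for 32-bit elements can be sketched as a scalar reference model in plain C; the function name and array-based vector representation here are ours, not part of ``arm_mve.h``:

```c
#include <stdint.h>
#include <assert.h>

/* Scalar model of vrev64q_m[_s32]: within each 64-bit doubleword of `a`,
   the two 32-bit elements swap places.  Lanes whose predicate bits are
   clear take their value from `inactive` instead (merging predication).
   An mve_pred16_t holds one bit per vector byte, so a 32-bit lane owns
   four predicate bits; this sketch tests the lane's lowest bit. */
void vrev64q_m_s32_model(int32_t dst[4], const int32_t inactive[4],
                         const int32_t a[4], uint16_t p)
{
    for (int lane = 0; lane < 4; lane++) {
        int src = lane ^ 1;                 /* swap within each 64-bit pair */
        int active = (p >> (lane * 4)) & 1;
        dst[lane] = active ? a[src] : inactive[lane];
    }
}
```

For example, with ``p = 0x00FF`` only the low doubleword is active, so the first two result lanes hold the swapped input elements and the last two come from ``inactive``.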
Extract one element from vector
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------------------+------------------------+-------------------------------+-------------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=========================================+========================+===============================+=========================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16_t [__arm_]vgetq_lane[_f16]( | a -> Qn | VMOV.U16 Rt, Qn[idx] | Rt -> result | |
| float16x8_t a, | 0 <= idx <= 7 | | | |
| const int idx) | | | | |
+-----------------------------------------+------------------------+-------------------------------+-------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32_t [__arm_]vgetq_lane[_f32]( | a -> Qn | VMOV.32 Rt, Qn[idx] | Rt -> result | |
| float32x4_t a, | 0 <= idx <= 3 | | | |
| const int idx) | | | | |
+-----------------------------------------+------------------------+-------------------------------+-------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8_t [__arm_]vgetq_lane[_s8]( | a -> Qn | VMOV.S8 Rt, Qn[idx] | Rt -> result | |
| int8x16_t a, | 0 <= idx <= 15 | | | |
| const int idx) | | | | |
+-----------------------------------------+------------------------+-------------------------------+-------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16_t [__arm_]vgetq_lane[_s16]( | a -> Qn | VMOV.S16 Rt, Qn[idx] | Rt -> result | |
| int16x8_t a, | 0 <= idx <= 7 | | | |
| const int idx) | | | | |
+-----------------------------------------+------------------------+-------------------------------+-------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32_t [__arm_]vgetq_lane[_s32]( | a -> Qn | VMOV.32 Rt, Qn[idx] | Rt -> result | |
| int32x4_t a, | 0 <= idx <= 3 | | | |
| const int idx) | | | | |
+-----------------------------------------+------------------------+-------------------------------+-------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int64_t [__arm_]vgetq_lane[_s64]( | a -> Qn | VMOV Rt1, Rt2, D(2*n+idx) | [Rt1,Rt2] -> result | |
| int64x2_t a, | 0 <= idx <= 1 | | | |
| const int idx) | | | | |
+-----------------------------------------+------------------------+-------------------------------+-------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8_t [__arm_]vgetq_lane[_u8]( | a -> Qn | VMOV.U8 Rt, Qn[idx] | Rt -> result | |
| uint8x16_t a, | 0 <= idx <= 15 | | | |
| const int idx) | | | | |
+-----------------------------------------+------------------------+-------------------------------+-------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16_t [__arm_]vgetq_lane[_u16]( | a -> Qn | VMOV.U16 Rt, Qn[idx] | Rt -> result | |
| uint16x8_t a, | 0 <= idx <= 7 | | | |
| const int idx) | | | | |
+-----------------------------------------+------------------------+-------------------------------+-------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32_t [__arm_]vgetq_lane[_u32]( | a -> Qn | VMOV.32 Rt, Qn[idx] | Rt -> result | |
| uint32x4_t a, | 0 <= idx <= 3 | | | |
| const int idx) | | | | |
+-----------------------------------------+------------------------+-------------------------------+-------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint64_t [__arm_]vgetq_lane[_u64]( | a -> Qn | VMOV Rt1, Rt2, D(2*n+idx) | [Rt1,Rt2] -> result | |
| uint64x2_t a, | 0 <= idx <= 1 | | | |
| const int idx) | | | | |
+-----------------------------------------+------------------------+-------------------------------+-------------------------+---------------------------+
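The lane-extraction semantics can be sketched as a scalar model in plain C (the function name is ours; note that the generated ``VMOV.S8`` sign-extends the byte into the core register, which the ``int8_t`` return type reproduces):

```c
#include <stdint.h>
#include <assert.h>

/* Scalar model of vgetq_lane[_s8]: returns element `idx` of the vector.
   In the intrinsic, `idx` must be a compile-time constant in [0, 15]. */
int8_t vgetq_lane_s8_model(const int8_t a[16], int idx)
{
    assert(idx >= 0 && idx <= 15);
    return a[idx];
}
```

The 64-bit variants behave the same way at the C level; they merely need a register pair (``VMOV Rt1, Rt2, ...``) to hold the extracted element.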
Set vector lane
~~~~~~~~~~~~~~~
+-------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+===========================================+========================+===============================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vsetq_lane[_f16]( | a -> Rt | VMOV.16 Qd[idx], Rt | Qd -> result | |
| float16_t a, | b -> Qd | | | |
| float16x8_t b, | 0 <= idx <= 7 | | | |
| const int idx) | | | | |
+-------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vsetq_lane[_f32]( | a -> Rt | VMOV.32 Qd[idx], Rt | Qd -> result | |
| float32_t a, | b -> Qd | | | |
| float32x4_t b, | 0 <= idx <= 3 | | | |
| const int idx) | | | | |
+-------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vsetq_lane[_s8]( | a -> Rt | VMOV.8 Qd[idx], Rt | Qd -> result | |
| int8_t a, | b -> Qd | | | |
| int8x16_t b, | 0 <= idx <= 15 | | | |
| const int idx) | | | | |
+-------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vsetq_lane[_s16]( | a -> Rt | VMOV.16 Qd[idx], Rt | Qd -> result | |
| int16_t a, | b -> Qd | | | |
| int16x8_t b, | 0 <= idx <= 7 | | | |
| const int idx) | | | | |
+-------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vsetq_lane[_s32]( | a -> Rt | VMOV.32 Qd[idx], Rt | Qd -> result | |
| int32_t a, | b -> Qd | | | |
| int32x4_t b, | 0 <= idx <= 3 | | | |
| const int idx) | | | | |
+-------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int64x2_t [__arm_]vsetq_lane[_s64]( | a -> [Rt1,Rt2] | VMOV D(2*d+idx), Rt1, Rt2 | Qd -> result | |
| int64_t a, | b -> Qd | | | |
| int64x2_t b, | 0 <= idx <= 1 | | | |
| const int idx) | | | | |
+-------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vsetq_lane[_u8]( | a -> Rt | VMOV.8 Qd[idx], Rt | Qd -> result | |
| uint8_t a, | b -> Qd | | | |
| uint8x16_t b, | 0 <= idx <= 15 | | | |
| const int idx) | | | | |
+-------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vsetq_lane[_u16]( | a -> Rt | VMOV.16 Qd[idx], Rt | Qd -> result | |
| uint16_t a, | b -> Qd | | | |
| uint16x8_t b, | 0 <= idx <= 7 | | | |
| const int idx) | | | | |
+-------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vsetq_lane[_u32]( | a -> Rt | VMOV.32 Qd[idx], Rt | Qd -> result | |
| uint32_t a, | b -> Qd | | | |
| uint32x4_t b, | 0 <= idx <= 3 | | | |
| const int idx) | | | | |
+-------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint64x2_t [__arm_]vsetq_lane[_u64]( | a -> [Rt1,Rt2] | VMOV D(2*d+idx), Rt1, Rt2 | Qd -> result | |
| uint64_t a, | b -> Qd | | | |
| uint64x2_t b, | 0 <= idx <= 1 | | | |
| const int idx) | | | | |
+-------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
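Correspondingly, the lane-insertion semantics can be sketched as a scalar model in plain C (again, the function name and array representation are ours):

```c
#include <stdint.h>
#include <assert.h>

/* Scalar model of vsetq_lane[_s32]: copies vector `b` into the result
   with element `idx` replaced by the scalar `a`; all other lanes are
   unchanged.  In the intrinsic, `idx` must be a compile-time constant
   in [0, 3]. */
void vsetq_lane_s32_model(int32_t dst[4], int32_t a, const int32_t b[4],
                          int idx)
{
    assert(idx >= 0 && idx <= 3);
    for (int lane = 0; lane < 4; lane++)
        dst[lane] = (lane == idx) ? a : b[lane];
}
```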
Create uninitialized vector
~~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+========================================================+==========================+===============+==================+===========================+
| .. code:: c | | | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vuninitializedq_s8(void) | | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | | | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vuninitializedq_s16(void) | | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | | | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vuninitializedq_s32(void) | | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | | | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vuninitializedq_s64(void) | | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | | | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vuninitializedq_u8(void) | | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | | | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vuninitializedq_u16(void) | | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | | | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vuninitializedq_u32(void) | | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | | | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vuninitializedq_u64(void) | | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | | | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vuninitializedq_f16(void) | | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | | | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vuninitializedq_f32(void) | | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vuninitializedq(int8x16_t t) | t -> Do Not Evaluate | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vuninitializedq(int16x8_t t) | t -> Do Not Evaluate | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vuninitializedq(int32x4_t t) | t -> Do Not Evaluate | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vuninitializedq(int64x2_t t) | t -> Do Not Evaluate | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vuninitializedq(uint8x16_t t) | t -> Do Not Evaluate | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vuninitializedq(uint16x8_t t) | t -> Do Not Evaluate | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vuninitializedq(uint32x4_t t) | t -> Do Not Evaluate | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vuninitializedq(uint64x2_t t) | t -> Do Not Evaluate | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vuninitializedq(float16x8_t t) | t -> Do Not Evaluate | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vuninitializedq(float32x4_t t) | t -> Do Not Evaluate | | Qd -> result | |
+--------------------------------------------------------+--------------------------+---------------+------------------+---------------------------+
Compare
=======
Equal to
~~~~~~~~
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=============================================+========================+===========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_f16]( | a -> Qn | VCMP.F16 eq, Qn, Qm | Rd -> result | |
| float16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| float16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_f32]( | a -> Qn | VCMP.F32 eq, Qn, Qm | Rd -> result | |
| float32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| float32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_s8]( | a -> Qn | VCMP.I8 eq, Qn, Qm | Rd -> result | |
| int8x16_t a, | b -> Qm | VMRS Rd, P0 | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_s16]( | a -> Qn | VCMP.I16 eq, Qn, Qm | Rd -> result | |
| int16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_s32]( | a -> Qn | VCMP.I32 eq, Qn, Qm | Rd -> result | |
| int32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_u8]( | a -> Qn | VCMP.I8 eq, Qn, Qm | Rd -> result | |
| uint8x16_t a, | b -> Qm | VMRS Rd, P0 | | |
| uint8x16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_u16]( | a -> Qn | VCMP.I16 eq, Qn, Qm | Rd -> result | |
| uint16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| uint16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_u32]( | a -> Qn | VCMP.I32 eq, Qn, Qm | Rd -> result | |
| uint32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| uint32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_n_f16]( | a -> Qn | VCMP.F16 eq, Qn, Rm | Rd -> result | |
| float16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| float16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_n_f32]( | a -> Qn | VCMP.F32 eq, Qn, Rm | Rd -> result | |
| float32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| float32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_n_s8]( | a -> Qn | VCMP.I8 eq, Qn, Rm | Rd -> result | |
| int8x16_t a, | b -> Rm | VMRS Rd, P0 | | |
| int8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_n_s16]( | a -> Qn | VCMP.I16 eq, Qn, Rm | Rd -> result | |
| int16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| int16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_n_s32]( | a -> Qn | VCMP.I32 eq, Qn, Rm | Rd -> result | |
| int32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| int32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_n_u8]( | a -> Qn | VCMP.I8 eq, Qn, Rm | Rd -> result | |
| uint8x16_t a, | b -> Rm | VMRS Rd, P0 | | |
| uint8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_n_u16]( | a -> Qn | VCMP.I16 eq, Qn, Rm | Rd -> result | |
| uint16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| uint16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq[_n_u32]( | a -> Qn | VCMP.I32 eq, Qn, Rm | Rd -> result | |
| uint32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| uint32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_f16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VCMPT.F16 eq, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_f32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCMPT.F32 eq, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_s8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VCMPT.I8 eq, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_s16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VCMPT.I16 eq, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_s32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VCMPT.I32 eq, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_u8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VCMPT.I8 eq, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_u16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VCMPT.I16 eq, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_u32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VCMPT.I32 eq, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_n_f16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float16x8_t a, | b -> Rm | VPST | | |
| float16_t b, | p -> Rp | VCMPT.F16 eq, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_n_f32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float32x4_t a, | b -> Rm | VPST | | |
| float32_t b, | p -> Rp | VCMPT.F32 eq, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_n_s8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int8_t b, | p -> Rp | VCMPT.I8 eq, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_n_s16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int16_t b, | p -> Rp | VCMPT.I16 eq, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_n_s32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VCMPT.I32 eq, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_n_u8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| uint8_t b, | p -> Rp | VCMPT.I8 eq, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_n_u16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| uint16_t b, | p -> Rp | VCMPT.I16 eq, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpeqq_m[_n_u32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| uint32_t b, | p -> Rp | VCMPT.I32 eq, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
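The comparisons above write the predicate register ``VPR.P0``, which ``VMRS`` then reads back as the ``mve_pred16_t`` result: one bit per vector byte, so an N-byte element that compares true sets N consecutive bits. A scalar sketch for the 32-bit case, with names of our choosing:

```c
#include <stdint.h>
#include <assert.h>

/* Scalar model of vcmpeqq[_s32]: each of the four 32-bit lanes that
   compares equal sets its four predicate bits (one bit per byte of the
   vector); lanes that differ leave their bits clear. */
uint16_t vcmpeqq_s32_model(const int32_t a[4], const int32_t b[4])
{
    uint16_t p = 0;
    for (int lane = 0; lane < 4; lane++)
        if (a[lane] == b[lane])
            p |= (uint16_t)(0xFu << (lane * 4));
    return p;
}
```

With 8-bit elements each lane would set a single bit, and with 16-bit elements two bits, following the same byte-per-bit layout.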
Not equal to
~~~~~~~~~~~~
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=============================================+========================+===========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_f16]( | a -> Qn | VCMP.F16 ne, Qn, Qm | Rd -> result | |
| float16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| float16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_f32]( | a -> Qn | VCMP.F32 ne, Qn, Qm | Rd -> result | |
| float32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| float32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_s8]( | a -> Qn | VCMP.I8 ne, Qn, Qm | Rd -> result | |
| int8x16_t a, | b -> Qm | VMRS Rd, P0 | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_s16]( | a -> Qn | VCMP.I16 ne, Qn, Qm | Rd -> result | |
| int16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_s32]( | a -> Qn | VCMP.I32 ne, Qn, Qm | Rd -> result | |
| int32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_u8]( | a -> Qn | VCMP.I8 ne, Qn, Qm | Rd -> result | |
| uint8x16_t a, | b -> Qm | VMRS Rd, P0 | | |
| uint8x16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_u16]( | a -> Qn | VCMP.I16 ne, Qn, Qm | Rd -> result | |
| uint16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| uint16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_u32]( | a -> Qn | VCMP.I32 ne, Qn, Qm | Rd -> result | |
| uint32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| uint32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_f16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VCMPT.F16 ne, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_f32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCMPT.F32 ne, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_s8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VCMPT.I8 ne, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_s16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VCMPT.I16 ne, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_s32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VCMPT.I32 ne, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_u8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VCMPT.I8 ne, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_u16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VCMPT.I16 ne, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_u32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VCMPT.I32 ne, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_n_f16]( | a -> Qn | VCMP.F16 ne, Qn, Rm | Rd -> result | |
| float16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| float16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_n_f32]( | a -> Qn | VCMP.F32 ne, Qn, Rm | Rd -> result | |
| float32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| float32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_n_s8]( | a -> Qn | VCMP.I8 ne, Qn, Rm | Rd -> result | |
| int8x16_t a, | b -> Rm | VMRS Rd, P0 | | |
| int8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_n_s16]( | a -> Qn | VCMP.I16 ne, Qn, Rm | Rd -> result | |
| int16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| int16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_n_s32]( | a -> Qn | VCMP.I32 ne, Qn, Rm | Rd -> result | |
| int32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| int32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_n_u8]( | a -> Qn | VCMP.I8 ne, Qn, Rm | Rd -> result | |
| uint8x16_t a, | b -> Rm | VMRS Rd, P0 | | |
| uint8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_n_u16]( | a -> Qn | VCMP.I16 ne, Qn, Rm | Rd -> result | |
| uint16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| uint16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq[_n_u32]( | a -> Qn | VCMP.I32 ne, Qn, Rm | Rd -> result | |
| uint32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| uint32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_n_f16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float16x8_t a, | b -> Rm | VPST | | |
| float16_t b, | p -> Rp | VCMPT.F16 ne, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_n_f32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float32x4_t a, | b -> Rm | VPST | | |
| float32_t b, | p -> Rp | VCMPT.F32 ne, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_n_s8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int8_t b, | p -> Rp | VCMPT.I8 ne, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_n_s16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int16_t b, | p -> Rp | VCMPT.I16 ne, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_n_s32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VCMPT.I32 ne, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_n_u8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| uint8_t b, | p -> Rp | VCMPT.I8 ne, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_n_u16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| uint16_t b, | p -> Rp | VCMPT.I16 ne, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpneq_m[_n_u32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| uint32_t b, | p -> Rp | VCMPT.I32 ne, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+

Greater than or equal to
~~~~~~~~~~~~~~~~~~~~~~~~

+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=============================================+========================+===========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq[_f16]( | a -> Qn | VCMP.F16 ge, Qn, Qm | Rd -> result | |
| float16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| float16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq[_f32]( | a -> Qn | VCMP.F32 ge, Qn, Qm | Rd -> result | |
| float32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| float32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq[_s8]( | a -> Qn | VCMP.S8 ge, Qn, Qm | Rd -> result | |
| int8x16_t a, | b -> Qm | VMRS Rd, P0 | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq[_s16]( | a -> Qn | VCMP.S16 ge, Qn, Qm | Rd -> result | |
| int16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq[_s32]( | a -> Qn | VCMP.S32 ge, Qn, Qm | Rd -> result | |
| int32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq_m[_f16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VCMPT.F16 ge, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq_m[_f32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCMPT.F32 ge, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq_m[_s8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VCMPT.S8 ge, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq_m[_s16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VCMPT.S16 ge, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq_m[_s32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VCMPT.S32 ge, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq[_n_f16]( | a -> Qn | VCMP.F16 ge, Qn, Rm | Rd -> result | |
| float16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| float16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq[_n_f32]( | a -> Qn | VCMP.F32 ge, Qn, Rm | Rd -> result | |
| float32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| float32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq[_n_s8]( | a -> Qn | VCMP.S8 ge, Qn, Rm | Rd -> result | |
| int8x16_t a, | b -> Rm | VMRS Rd, P0 | | |
| int8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq[_n_s16]( | a -> Qn | VCMP.S16 ge, Qn, Rm | Rd -> result | |
| int16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| int16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq[_n_s32]( | a -> Qn | VCMP.S32 ge, Qn, Rm | Rd -> result | |
| int32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| int32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq_m[_n_f16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float16x8_t a, | b -> Rm | VPST | | |
| float16_t b, | p -> Rp | VCMPT.F16 ge, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq_m[_n_f32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float32x4_t a, | b -> Rm | VPST | | |
| float32_t b, | p -> Rp | VCMPT.F32 ge, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq_m[_n_s8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int8_t b, | p -> Rp | VCMPT.S8 ge, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq_m[_n_s16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int16_t b, | p -> Rp | VCMPT.S16 ge, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgeq_m[_n_s32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VCMPT.S32 ge, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpcsq[_u8]( | a -> Qn | VCMP.U8 cs, Qn, Qm | Rd -> result | |
| uint8x16_t a, | b -> Qm | VMRS Rd, P0 | | |
| uint8x16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpcsq[_u16]( | a -> Qn | VCMP.U16 cs, Qn, Qm | Rd -> result | |
| uint16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| uint16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpcsq[_u32]( | a -> Qn | VCMP.U32 cs, Qn, Qm | Rd -> result | |
| uint32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| uint32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpcsq_m[_u8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VCMPT.U8 cs, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpcsq_m[_u16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VCMPT.U16 cs, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpcsq_m[_u32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VCMPT.U32 cs, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpcsq[_n_u8]( | a -> Qn | VCMP.U8 cs, Qn, Rm | Rd -> result | |
| uint8x16_t a, | b -> Rm | VMRS Rd, P0 | | |
| uint8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpcsq[_n_u16]( | a -> Qn | VCMP.U16 cs, Qn, Rm | Rd -> result | |
| uint16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| uint16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpcsq[_n_u32]( | a -> Qn | VCMP.U32 cs, Qn, Rm | Rd -> result | |
| uint32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| uint32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpcsq_m[_n_u8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| uint8_t b, | p -> Rp | VCMPT.U8 cs, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpcsq_m[_n_u16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| uint16_t b, | p -> Rp | VCMPT.U16 cs, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpcsq_m[_n_u32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| uint32_t b, | p -> Rp | VCMPT.U32 cs, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+

Greater than
~~~~~~~~~~~~

+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=============================================+========================+===========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq[_f16]( | a -> Qn | VCMP.F16 gt, Qn, Qm | Rd -> result | |
| float16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| float16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq[_f32]( | a -> Qn | VCMP.F32 gt, Qn, Qm | Rd -> result | |
| float32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| float32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq[_s8]( | a -> Qn | VCMP.S8 gt, Qn, Qm | Rd -> result | |
| int8x16_t a, | b -> Qm | VMRS Rd, P0 | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq[_s16]( | a -> Qn | VCMP.S16 gt, Qn, Qm | Rd -> result | |
| int16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq[_s32]( | a -> Qn | VCMP.S32 gt, Qn, Qm | Rd -> result | |
| int32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq_m[_f16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VCMPT.F16 gt, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq_m[_f32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCMPT.F32 gt, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq_m[_s8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VCMPT.S8 gt, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq_m[_s16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VCMPT.S16 gt, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq_m[_s32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VCMPT.S32 gt, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq[_n_f16]( | a -> Qn | VCMP.F16 gt, Qn, Rm | Rd -> result | |
| float16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| float16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq[_n_f32]( | a -> Qn | VCMP.F32 gt, Qn, Rm | Rd -> result | |
| float32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| float32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq[_n_s8]( | a -> Qn | VCMP.S8 gt, Qn, Rm | Rd -> result | |
| int8x16_t a, | b -> Rm | VMRS Rd, P0 | | |
| int8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq[_n_s16]( | a -> Qn | VCMP.S16 gt, Qn, Rm | Rd -> result | |
| int16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| int16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq[_n_s32]( | a -> Qn | VCMP.S32 gt, Qn, Rm | Rd -> result | |
| int32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| int32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq_m[_n_f16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float16x8_t a, | b -> Rm | VPST | | |
| float16_t b, | p -> Rp | VCMPT.F16 gt, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq_m[_n_f32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float32x4_t a, | b -> Rm | VPST | | |
| float32_t b, | p -> Rp | VCMPT.F32 gt, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq_m[_n_s8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int8_t b, | p -> Rp | VCMPT.S8 gt, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq_m[_n_s16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int16_t b, | p -> Rp | VCMPT.S16 gt, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpgtq_m[_n_s32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VCMPT.S32 gt, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmphiq[_u8]( | a -> Qn | VCMP.U8 hi, Qn, Qm | Rd -> result | |
| uint8x16_t a, | b -> Qm | VMRS Rd, P0 | | |
| uint8x16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmphiq[_u16]( | a -> Qn | VCMP.U16 hi, Qn, Qm | Rd -> result | |
| uint16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| uint16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmphiq[_u32]( | a -> Qn | VCMP.U32 hi, Qn, Qm | Rd -> result | |
| uint32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| uint32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmphiq_m[_u8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VCMPT.U8 hi, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmphiq_m[_u16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VCMPT.U16 hi, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmphiq_m[_u32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VCMPT.U32 hi, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmphiq[_n_u8]( | a -> Qn | VCMP.U8 hi, Qn, Rm | Rd -> result | |
| uint8x16_t a, | b -> Rm | VMRS Rd, P0 | | |
| uint8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmphiq[_n_u16]( | a -> Qn | VCMP.U16 hi, Qn, Rm | Rd -> result | |
| uint16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| uint16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmphiq[_n_u32]( | a -> Qn | VCMP.U32 hi, Qn, Rm | Rd -> result | |
| uint32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| uint32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmphiq_m[_n_u8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| uint8_t b, | p -> Rp | VCMPT.U8 hi, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmphiq_m[_n_u16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| uint16_t b, | p -> Rp | VCMPT.U16 hi, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmphiq_m[_n_u32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| uint32_t b, | p -> Rp | VCMPT.U32 hi, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+

Less than or equal to
~~~~~~~~~~~~~~~~~~~~~

+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=============================================+========================+===========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq[_f16]( | a -> Qn | VCMP.F16 le, Qn, Qm | Rd -> result | |
| float16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| float16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq[_f32]( | a -> Qn | VCMP.F32 le, Qn, Qm | Rd -> result | |
| float32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| float32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq[_s8]( | a -> Qn | VCMP.S8 le, Qn, Qm | Rd -> result | |
| int8x16_t a, | b -> Qm | VMRS Rd, P0 | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq[_s16]( | a -> Qn | VCMP.S16 le, Qn, Qm | Rd -> result | |
| int16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq[_s32]( | a -> Qn | VCMP.S32 le, Qn, Qm | Rd -> result | |
| int32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq_m[_f16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VCMPT.F16 le, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq_m[_f32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCMPT.F32 le, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq_m[_s8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VCMPT.S8 le, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq_m[_s16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VCMPT.S16 le, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq_m[_s32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VCMPT.S32 le, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq[_n_f16]( | a -> Qn | VCMP.F16 le, Qn, Rm | Rd -> result | |
| float16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| float16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq[_n_f32]( | a -> Qn | VCMP.F32 le, Qn, Rm | Rd -> result | |
| float32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| float32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq[_n_s8]( | a -> Qn | VCMP.S8 le, Qn, Rm | Rd -> result | |
| int8x16_t a, | b -> Rm | VMRS Rd, P0 | | |
| int8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq[_n_s16]( | a -> Qn | VCMP.S16 le, Qn, Rm | Rd -> result | |
| int16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| int16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq[_n_s32]( | a -> Qn | VCMP.S32 le, Qn, Rm | Rd -> result | |
| int32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| int32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq_m[_n_f16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float16x8_t a, | b -> Rm | VPST | | |
| float16_t b, | p -> Rp | VCMPT.F16 le, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq_m[_n_f32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float32x4_t a, | b -> Rm | VPST | | |
| float32_t b, | p -> Rp | VCMPT.F32 le, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq_m[_n_s8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int8_t b, | p -> Rp | VCMPT.S8 le, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq_m[_n_s16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int16_t b, | p -> Rp | VCMPT.S16 le, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpleq_m[_n_s32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VCMPT.S32 le, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+

Less than
~~~~~~~~~

+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=============================================+========================+===========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq[_f16]( | a -> Qn | VCMP.F16 lt, Qn, Qm | Rd -> result | |
| float16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| float16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq[_f32]( | a -> Qn | VCMP.F32 lt, Qn, Qm | Rd -> result | |
| float32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| float32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq[_s8]( | a -> Qn | VCMP.S8 lt, Qn, Qm | Rd -> result | |
| int8x16_t a, | b -> Qm | VMRS Rd, P0 | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq[_s16]( | a -> Qn | VCMP.S16 lt, Qn, Qm | Rd -> result | |
| int16x8_t a, | b -> Qm | VMRS Rd, P0 | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq[_s32]( | a -> Qn | VCMP.S32 lt, Qn, Qm | Rd -> result | |
| int32x4_t a, | b -> Qm | VMRS Rd, P0 | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq_m[_f16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VCMPT.F16 lt, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq_m[_f32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCMPT.F32 lt, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq_m[_s8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VCMPT.S8 lt, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq_m[_s16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VCMPT.S16 lt, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq_m[_s32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VCMPT.S32 lt, Qn, Qm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq[_n_f16]( | a -> Qn | VCMP.F16 lt, Qn, Rm | Rd -> result | |
| float16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| float16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq[_n_f32]( | a -> Qn | VCMP.F32 lt, Qn, Rm | Rd -> result | |
| float32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| float32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq[_n_s8]( | a -> Qn | VCMP.S8 lt, Qn, Rm | Rd -> result | |
| int8x16_t a, | b -> Rm | VMRS Rd, P0 | | |
| int8_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq[_n_s16]( | a -> Qn | VCMP.S16 lt, Qn, Rm | Rd -> result | |
| int16x8_t a, | b -> Rm | VMRS Rd, P0 | | |
| int16_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq[_n_s32]( | a -> Qn | VCMP.S32 lt, Qn, Rm | Rd -> result | |
| int32x4_t a, | b -> Rm | VMRS Rd, P0 | | |
| int32_t b) | | | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq_m[_n_f16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float16x8_t a, | b -> Rm | VPST | | |
| float16_t b, | p -> Rp | VCMPT.F16 lt, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq_m[_n_f32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| float32x4_t a, | b -> Rm | VPST | | |
| float32_t b, | p -> Rp | VCMPT.F32 lt, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq_m[_n_s8]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int8_t b, | p -> Rp | VCMPT.S8 lt, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq_m[_n_s16]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int16_t b, | p -> Rp | VCMPT.S16 lt, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vcmpltq_m[_n_s32]( | a -> Qn | VMSR P0, Rp | Rd -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VCMPT.S32 lt, Qn, Rm | | |
| mve_pred16_t p) | | VMRS Rd, P0 | | |
+---------------------------------------------+------------------------+---------------------------+------------------+---------------------------+

Vector arithmetic
=================

Minimum
~~~~~~~

+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+===========================================+========================+============================+===================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vminq[_s8]( | a -> Qn | VMIN.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vminq[_s16]( | a -> Qn | VMIN.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vminq[_s32]( | a -> Qn | VMIN.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vminq[_u8]( | a -> Qn | VMIN.U8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vminq[_u16]( | a -> Qn | VMIN.U16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vminq[_u32]( | a -> Qn | VMIN.U32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vminq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VMINT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vminq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VMINT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vminq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VMINT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vminq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VMINT.U8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vminq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VMINT.U16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vminq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VMINT.U32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vminq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMINT.S8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vminq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMINT.S16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vminq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMINT.S32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vminq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VMINT.U8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vminq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VMINT.U16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vminq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VMINT.U32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vminaq[_s8]( | a -> Qda | VMINA.S8 Qda, Qm | Qda -> result | |
| uint8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vminaq[_s16]( | a -> Qda | VMINA.S16 Qda, Qm | Qda -> result | |
| uint16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vminaq[_s32]( | a -> Qda | VMINA.S32 Qda, Qm | Qda -> result | |
| uint32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vminaq_m[_s8]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMINAT.S8 Qda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vminaq_m[_s16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMINAT.S16 Qda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vminaq_m[_s32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMINAT.S32 Qda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8_t [__arm_]vminvq[_s8]( | a -> Rda | VMINV.S8 Rda, Qm | Rda -> result | |
| int8_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16_t [__arm_]vminvq[_s16]( | a -> Rda | VMINV.S16 Rda, Qm | Rda -> result | |
| int16_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vminvq[_s32]( | a -> Rda | VMINV.S32 Rda, Qm | Rda -> result | |
| int32_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8_t [__arm_]vminvq[_u8]( | a -> Rda | VMINV.U8 Rda, Qm | Rda -> result | |
| uint8_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16_t [__arm_]vminvq[_u16]( | a -> Rda | VMINV.U16 Rda, Qm | Rda -> result | |
| uint16_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vminvq[_u32]( | a -> Rda | VMINV.U32 Rda, Qm | Rda -> result | |
| uint32_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8_t [__arm_]vminvq_p[_s8]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int8_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMINVT.S8 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16_t [__arm_]vminvq_p[_s16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMINVT.S16 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vminvq_p[_s32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMINVT.S32 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8_t [__arm_]vminvq_p[_u8]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint8_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VMINVT.U8 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16_t [__arm_]vminvq_p[_u16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VMINVT.U16 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vminvq_p[_u32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VMINVT.U32 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8_t [__arm_]vminavq[_s8]( | a -> Rda | VMINAV.S8 Rda, Qm | Rda -> result | |
| uint8_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16_t [__arm_]vminavq[_s16]( | a -> Rda | VMINAV.S16 Rda, Qm | Rda -> result | |
| uint16_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vminavq[_s32]( | a -> Rda | VMINAV.S32 Rda, Qm | Rda -> result | |
| uint32_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8_t [__arm_]vminavq_p[_s8]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint8_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMINAVT.S8 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16_t [__arm_]vminavq_p[_s16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMINAVT.S16 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vminavq_p[_s32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMINAVT.S32 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vminnmq[_f16]( | a -> Qn | VMINNM.F16 Qd, Qn, Qm | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vminnmq[_f32]( | a -> Qn | VMINNM.F32 Qd, Qn, Qm | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vminnmq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VMINNMT.F16 Qd, Qn, Qm | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vminnmq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VMINNMT.F32 Qd, Qn, Qm | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vminnmq_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VMINNMT.F16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vminnmq_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VMINNMT.F32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vminnmaq[_f16]( | a -> Qda | VMINNMA.F16 Qda, Qm | Qda -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vminnmaq[_f32]( | a -> Qda | VMINNMA.F32 Qda, Qm | Qda -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vminnmaq_m[_f16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VMINNMAT.F16 Qda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vminnmaq_m[_f32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VMINNMAT.F32 Qda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16_t [__arm_]vminnmvq[_f16]( | a -> Rda | VMINNMV.F16 Rda, Qm | Rda -> result | |
| float16_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32_t [__arm_]vminnmvq[_f32]( | a -> Rda | VMINNMV.F32 Rda, Qm | Rda -> result | |
| float32_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16_t [__arm_]vminnmvq_p[_f16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| float16_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VMINNMVT.F16 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32_t [__arm_]vminnmvq_p[_f32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| float32_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VMINNMVT.F32 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16_t [__arm_]vminnmavq[_f16]( | a -> Rda | VMINNMAV.F16 Rda, Qm | Rda -> result | |
| float16_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32_t [__arm_]vminnmavq[_f32]( | a -> Rda | VMINNMAV.F32 Rda, Qm | Rda -> result | |
| float32_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16_t [__arm_]vminnmavq_p[_f16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| float16_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VMINNMAVT.F16 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32_t [__arm_]vminnmavq_p[_f32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| float32_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VMINNMAVT.F32 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
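The minimum-reduction intrinsics above (``vminvq``, ``vminvq_p``) fold a
vector into a scalar, seeded by the scalar argument ``a``. As a sketch of
that semantics only, the following scalar reference model mirrors the
behavior in plain C; the ``ref_*`` names are hypothetical helpers, not
part of ``arm_mve.h``, and the per-lane predicate mapping shown (bit *i*
of the 16-bit predicate selecting 8-bit lane *i*) is an assumption made
for illustration.

.. code:: c

   #include <stdint.h>
   #include <stdlib.h>

   /* Scalar model of VMINV.S8: the result is the minimum of the
      initial scalar `a` and every lane of `b` (here a plain array). */
   int8_t ref_vminvq_s8(int8_t a, const int8_t b[16])
   {
       int8_t min = a;
       for (int i = 0; i < 16; i++)
           if (b[i] < min)
               min = b[i];
       return min;
   }

   /* Scalar model of the predicated form (VPST; VMINVT.S8): lanes
      whose predicate bit is clear do not contribute to the result. */
   int8_t ref_vminvq_p_s8(int8_t a, const int8_t b[16], uint16_t p)
   {
       int8_t min = a;
       for (int i = 0; i < 16; i++)
           if (((p >> i) & 1) && b[i] < min)
               min = b[i];
       return min;
   }

   int main(void)
   {
       int8_t v[16] = { 5, -3, 7, 0, 9, -8, 2, 1,
                        4, 6, -1, 3, 8, -2, 10, 11 };
       /* Unpredicated: -8 is the smallest lane and beats a = 0. */
       if (ref_vminvq_s8(0, v) != -8) abort();
       /* Mask off lane 5 (the -8): the minimum becomes -3. */
       if (ref_vminvq_p_s8(0, v, (uint16_t)~(1u << 5)) != -3) abort();
       return 0;
   }

The same seeded-reduction pattern applies to the ``vminavq`` and
``vmaxvq`` families, with absolute value or maximum substituted for the
comparison.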
Maximum
~~~~~~~
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+===========================================+========================+============================+===================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vmaxq[_s8]( | a -> Qn | VMAX.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vmaxq[_s16]( | a -> Qn | VMAX.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vmaxq[_s32]( | a -> Qn | VMAX.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vmaxq[_u8]( | a -> Qn | VMAX.U8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vmaxq[_u16]( | a -> Qn | VMAX.U16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vmaxq[_u32]( | a -> Qn | VMAX.U32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmaxq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VMAXT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmaxq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VMAXT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmaxq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VMAXT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmaxq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VMAXT.U8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmaxq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VMAXT.U16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmaxq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VMAXT.U32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmaxq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMAXT.S8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmaxq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMAXT.S16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmaxq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMAXT.S32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmaxq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VMAXT.U8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmaxq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VMAXT.U16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmaxq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VMAXT.U32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmaxaq[_s8]( | a -> Qda | VMAXA.S8 Qda, Qm | Qda -> result | |
| uint8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmaxaq[_s16]( | a -> Qda | VMAXA.S16 Qda, Qm | Qda -> result | |
| uint16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmaxaq[_s32]( | a -> Qda | VMAXA.S32 Qda, Qm | Qda -> result | |
| uint32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmaxaq_m[_s8]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMAXAT.S8 Qda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmaxaq_m[_s16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMAXAT.S16 Qda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmaxaq_m[_s32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMAXAT.S32 Qda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8_t [__arm_]vmaxvq[_s8]( | a -> Rda | VMAXV.S8 Rda, Qm | Rda -> result | |
| int8_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16_t [__arm_]vmaxvq[_s16]( | a -> Rda | VMAXV.S16 Rda, Qm | Rda -> result | |
| int16_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmaxvq[_s32]( | a -> Rda | VMAXV.S32 Rda, Qm | Rda -> result | |
| int32_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8_t [__arm_]vmaxvq[_u8]( | a -> Rda | VMAXV.U8 Rda, Qm | Rda -> result | |
| uint8_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16_t [__arm_]vmaxvq[_u16]( | a -> Rda | VMAXV.U16 Rda, Qm | Rda -> result | |
| uint16_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmaxvq[_u32]( | a -> Rda | VMAXV.U32 Rda, Qm | Rda -> result | |
| uint32_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8_t [__arm_]vmaxvq_p[_s8]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int8_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMAXVT.S8 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16_t [__arm_]vmaxvq_p[_s16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMAXVT.S16 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmaxvq_p[_s32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMAXVT.S32 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8_t [__arm_]vmaxvq_p[_u8]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint8_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VMAXVT.U8 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16_t [__arm_]vmaxvq_p[_u16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VMAXVT.U16 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmaxvq_p[_u32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VMAXVT.U32 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8_t [__arm_]vmaxavq[_s8]( | a -> Rda | VMAXAV.S8 Rda, Qm | Rda -> result | |
| uint8_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16_t [__arm_]vmaxavq[_s16]( | a -> Rda | VMAXAV.S16 Rda, Qm | Rda -> result | |
| uint16_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmaxavq[_s32]( | a -> Rda | VMAXAV.S32 Rda, Qm | Rda -> result | |
| uint32_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8_t [__arm_]vmaxavq_p[_s8]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint8_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMAXAVT.S8 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16_t [__arm_]vmaxavq_p[_s16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMAXAVT.S16 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmaxavq_p[_s32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMAXAVT.S32 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vmaxnmq[_f16]( | a -> Qn | VMAXNM.F16 Qd, Qn, Qm | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vmaxnmq[_f32]( | a -> Qn | VMAXNM.F32 Qd, Qn, Qm | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vmaxnmq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VMAXNMT.F16 Qd, Qn, Qm | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vmaxnmq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VMAXNMT.F32 Qd, Qn, Qm | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vmaxnmq_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VMAXNMT.F16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vmaxnmq_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VMAXNMT.F32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vmaxnmaq[_f16]( | a -> Qda | VMAXNMA.F16 Qda, Qm | Qda -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vmaxnmaq[_f32]( | a -> Qda | VMAXNMA.F32 Qda, Qm | Qda -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vmaxnmaq_m[_f16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VMAXNMAT.F16 Qda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vmaxnmaq_m[_f32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VMAXNMAT.F32 Qda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16_t [__arm_]vmaxnmvq[_f16]( | a -> Rda | VMAXNMV.F16 Rda, Qm | Rda -> result | |
| float16_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32_t [__arm_]vmaxnmvq[_f32]( | a -> Rda | VMAXNMV.F32 Rda, Qm | Rda -> result | |
| float32_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16_t [__arm_]vmaxnmvq_p[_f16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| float16_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VMAXNMVT.F16 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32_t [__arm_]vmaxnmvq_p[_f32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| float32_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VMAXNMVT.F32 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16_t [__arm_]vmaxnmavq[_f16]( | a -> Rda | VMAXNMAV.F16 Rda, Qm | Rda -> result | |
| float16_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32_t [__arm_]vmaxnmavq[_f32]( | a -> Rda | VMAXNMAV.F32 Rda, Qm | Rda -> result | |
| float32_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16_t [__arm_]vmaxnmavq_p[_f16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| float16_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VMAXNMAVT.F16 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32_t [__arm_]vmaxnmavq_p[_f32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| float32_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VMAXNMAVT.F32 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
Absolute
~~~~~~~~
Absolute difference and accumulate
----------------------------------
+--------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+======================================+========================+============================+===================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vabavq[_s8]( | a -> Rda | VABAV.S8 Rda, Qn, Qm | Rda -> result | |
| uint32_t a, | b -> Qn | | | |
| int8x16_t b, | c -> Qm | | | |
| int8x16_t c) | | | | |
+--------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vabavq[_s16]( | a -> Rda | VABAV.S16 Rda, Qn, Qm | Rda -> result | |
| uint32_t a, | b -> Qn | | | |
| int16x8_t b, | c -> Qm | | | |
| int16x8_t c) | | | | |
+--------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vabavq[_s32]( | a -> Rda | VABAV.S32 Rda, Qn, Qm | Rda -> result | |
| uint32_t a, | b -> Qn | | | |
| int32x4_t b, | c -> Qm | | | |
| int32x4_t c) | | | | |
+--------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vabavq[_u8]( | a -> Rda | VABAV.U8 Rda, Qn, Qm | Rda -> result | |
| uint32_t a, | b -> Qn | | | |
| uint8x16_t b, | c -> Qm | | | |
| uint8x16_t c) | | | | |
+--------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vabavq[_u16]( | a -> Rda | VABAV.U16 Rda, Qn, Qm | Rda -> result | |
| uint32_t a, | b -> Qn | | | |
| uint16x8_t b, | c -> Qm | | | |
| uint16x8_t c) | | | | |
+--------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vabavq[_u32]( | a -> Rda | VABAV.U32 Rda, Qn, Qm | Rda -> result | |
| uint32_t a, | b -> Qn | | | |
| uint32x4_t b, | c -> Qm | | | |
| uint32x4_t c) | | | | |
+--------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vabavq_p[_s8]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t a, | b -> Qn | VPST | | |
| int8x16_t b, | c -> Qm | VABAVT.S8 Rda, Qn, Qm | | |
| int8x16_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vabavq_p[_s16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t a, | b -> Qn | VPST | | |
| int16x8_t b, | c -> Qm | VABAVT.S16 Rda, Qn, Qm | | |
| int16x8_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vabavq_p[_s32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t a, | b -> Qn | VPST | | |
| int32x4_t b, | c -> Qm | VABAVT.S32 Rda, Qn, Qm | | |
| int32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vabavq_p[_u8]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t a, | b -> Qn | VPST | | |
| uint8x16_t b, | c -> Qm | VABAVT.U8 Rda, Qn, Qm | | |
| uint8x16_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vabavq_p[_u16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t a, | b -> Qn | VPST | | |
| uint16x8_t b, | c -> Qm | VABAVT.U16 Rda, Qn, Qm | | |
| uint16x8_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vabavq_p[_u32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t a, | b -> Qn | VPST | | |
| uint32x4_t b, | c -> Qm | VABAVT.U32 Rda, Qn, Qm | | |
| uint32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
Absolute difference
-------------------
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+========================================+========================+==========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vabdq[_s8]( | a -> Qn | VABD.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vabdq[_s16]( | a -> Qn | VABD.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vabdq[_s32]( | a -> Qn | VABD.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vabdq[_u8]( | a -> Qn | VABD.U8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vabdq[_u16]( | a -> Qn | VABD.U16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vabdq[_u32]( | a -> Qn | VABD.U32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vabdq[_f16]( | a -> Qn | VABD.F16 Qd, Qn, Qm | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vabdq[_f32]( | a -> Qn | VABD.F32 Qd, Qn, Qm | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vabdq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VABDT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vabdq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VABDT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vabdq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VABDT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vabdq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VABDT.U8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vabdq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VABDT.U16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vabdq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VABDT.U32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vabdq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VABDT.F16 Qd, Qn, Qm | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vabdq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VABDT.F32 Qd, Qn, Qm | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vabdq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VABDT.S8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vabdq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VABDT.S16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vabdq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VABDT.S32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vabdq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VABDT.U8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vabdq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VABDT.U16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vabdq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VABDT.U32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vabdq_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VABDT.F16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vabdq_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VABDT.F32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+--------------------------+------------------+---------------------------+
Absolute value
--------------
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+====================================================+========================+=======================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vabsq[_f16](float16x8_t a) | a -> Qm | VABS.F16 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vabsq[_f32](float32x4_t a) | a -> Qm | VABS.F32 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vabsq[_s8](int8x16_t a) | a -> Qm | VABS.S8 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vabsq[_s16](int16x8_t a) | a -> Qm | VABS.S16 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vabsq[_s32](int32x4_t a) | a -> Qm | VABS.S32 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vabsq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VABST.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vabsq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VABST.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vabsq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | p -> Rp | VABST.S8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vabsq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | p -> Rp | VABST.S16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vabsq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | p -> Rp | VABST.S32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vabsq_x[_f16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VABST.F16 Qd, Qm | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vabsq_x[_f32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VABST.F32 Qd, Qm | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vabsq_x[_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VABST.S8 Qd, Qm | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vabsq_x[_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VABST.S16 Qd, Qm | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vabsq_x[_s32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VABST.S32 Qd, Qm | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vqabsq[_s8](int8x16_t a) | a -> Qm | VQABS.S8 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vqabsq[_s16](int16x8_t a) | a -> Qm | VQABS.S16 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vqabsq[_s32](int32x4_t a) | a -> Qm | VQABS.S32 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqabsq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | p -> Rp | VQABST.S8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqabsq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | p -> Rp | VQABST.S16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqabsq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | p -> Rp | VQABST.S32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
Add
~~~
Addition
--------
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==================================================+=========================+===================================+=============================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vadciq[_s32]( | a -> Qn | VADCI.I32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | VMRS Rt, FPSCR_nzcvqc | Rt -> *carry_out | |
| int32x4_t b, | | LSR Rt, #29 | | |
| unsigned *carry_out) | | AND Rt, #1 | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vadciq[_u32]( | a -> Qn | VADCI.I32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | VMRS Rt, FPSCR_nzcvqc | Rt -> *carry_out | |
| uint32x4_t b, | | LSR Rt, #29 | | |
| unsigned *carry_out) | | AND Rt, #1 | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vadciq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | Rt -> *carry_out | |
| int32x4_t a, | b -> Qm | VADCIT.I32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | VMRS Rt, FPSCR_nzcvqc | | |
| unsigned *carry_out, | | LSR Rt, #29 | | |
| mve_pred16_t p) | | AND Rt, #1 | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vadciq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | Rt -> *carry_out | |
| uint32x4_t a, | b -> Qm | VADCIT.I32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | VMRS Rt, FPSCR_nzcvqc | | |
| unsigned *carry_out, | | LSR Rt, #29 | | |
| mve_pred16_t p) | | AND Rt, #1 | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vadcq[_s32]( | a -> Qn | VMRS Rs, FPSCR_nzcvqc | Qd -> result | |
| int32x4_t a, | b -> Qm | BFI Rs, Rt, #29, #1 | Rt -> *carry | |
| int32x4_t b, | *carry -> Rt | VMSR FPSCR_nzcvqc, Rs | | |
| unsigned *carry) | | VADC.I32 Qd, Qn, Qm | | |
| | | VMRS Rt, FPSCR_nzcvqc | | |
| | | LSR Rt, #29 | | |
| | | AND Rt, #1 | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vadcq[_u32]( | a -> Qn | VMRS Rs, FPSCR_nzcvqc | Qd -> result | |
| uint32x4_t a, | b -> Qm | BFI Rs, Rt, #29, #1 | Rt -> *carry | |
| uint32x4_t b, | *carry -> Rt | VMSR FPSCR_nzcvqc, Rs | | |
| unsigned *carry) | | VADC.I32 Qd, Qn, Qm | | |
| | | VMRS Rt, FPSCR_nzcvqc | | |
| | | LSR Rt, #29 | | |
| | | AND Rt, #1 | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vadcq_m[_s32]( | inactive -> Qd | VMRS Rs, FPSCR_nzcvqc | Qd -> result | |
| int32x4_t inactive, | a -> Qn | BFI Rs, Rt, #29, #1 | Rt -> *carry | |
| int32x4_t a, | b -> Qm | VMSR FPSCR_nzcvqc, Rs | | |
| int32x4_t b, | *carry -> Rt | VMSR P0, Rp | | |
| unsigned *carry, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VADCT.I32 Qd, Qn, Qm | | |
| | | VMRS Rt, FPSCR_nzcvqc | | |
| | | LSR Rt, #29 | | |
| | | AND Rt, #1 | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vadcq_m[_u32]( | inactive -> Qd | VMRS Rs, FPSCR_nzcvqc | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | BFI Rs, Rt, #29, #1 | Rt -> *carry | |
| uint32x4_t a, | b -> Qm | VMSR FPSCR_nzcvqc, Rs | | |
| uint32x4_t b, | *carry -> Rt | VMSR P0, Rp | | |
| unsigned *carry, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VADCT.I32 Qd, Qn, Qm | | |
| | | VMRS Rt, FPSCR_nzcvqc | | |
| | | LSR Rt, #29 | | |
| | | AND Rt, #1 | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vaddq[_f16]( | a -> Qn | VADD.F16 Qd, Qn, Qm | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vaddq[_f32]( | a -> Qn | VADD.F32 Qd, Qn, Qm | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vaddq[_n_f16]( | a -> Qn | VADD.F16 Qd, Qn, Rm | Qd -> result | |
| float16x8_t a, | b -> Rm | | | |
| float16_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vaddq[_n_f32]( | a -> Qn | VADD.F32 Qd, Qn, Rm | Qd -> result | |
| float32x4_t a, | b -> Rm | | | |
| float32_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vaddq[_s8]( | a -> Qn | VADD.I8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vaddq[_s16]( | a -> Qn | VADD.I16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vaddq[_s32]( | a -> Qn | VADD.I32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vaddq[_n_s8]( | a -> Qn | VADD.I8 Qd, Qn, Rm | Qd -> result | |
| int8x16_t a, | b -> Rm | | | |
| int8_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vaddq[_n_s16]( | a -> Qn | VADD.I16 Qd, Qn, Rm | Qd -> result | |
| int16x8_t a, | b -> Rm | | | |
| int16_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vaddq[_n_s32]( | a -> Qn | VADD.I32 Qd, Qn, Rm | Qd -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vaddq[_u8]( | a -> Qn | VADD.I8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vaddq[_u16]( | a -> Qn | VADD.I16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vaddq[_u32]( | a -> Qn | VADD.I32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vaddq[_n_u8]( | a -> Qn | VADD.I8 Qd, Qn, Rm | Qd -> result | |
| uint8x16_t a, | b -> Rm | | | |
| uint8_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vaddq[_n_u16]( | a -> Qn | VADD.I16 Qd, Qn, Rm | Qd -> result | |
| uint16x8_t a, | b -> Rm | | | |
| uint16_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vaddq[_n_u32]( | a -> Qn | VADD.I32 Qd, Qn, Rm | Qd -> result | |
| uint32x4_t a, | b -> Rm | | | |
| uint32_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vaddq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VADDT.F16 Qd, Qn, Qm | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vaddq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VADDT.F32 Qd, Qn, Qm | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vaddq_m[_n_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Rm | VADDT.F16 Qd, Qn, Rm | | |
| float16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vaddq_m[_n_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Rm | VADDT.F32 Qd, Qn, Rm | | |
| float32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vaddq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VADDT.I8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vaddq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VADDT.I16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vaddq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VADDT.I32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vaddq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Rm | VADDT.I8 Qd, Qn, Rm | | |
| int8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vaddq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Rm | VADDT.I16 Qd, Qn, Rm | | |
| int16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vaddq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Rm | VADDT.I32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vaddq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VADDT.I8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vaddq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VADDT.I16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vaddq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VADDT.I32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vaddq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Rm | VADDT.I8 Qd, Qn, Rm | | |
| uint8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vaddq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Rm | VADDT.I16 Qd, Qn, Rm | | |
| uint16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vaddq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Rm | VADDT.I32 Qd, Qn, Rm | | |
| uint32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vaddq_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VADDT.F16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vaddq_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VADDT.F32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vaddq_x[_n_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Rm | VPST | | |
| float16_t b, | p -> Rp | VADDT.F16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vaddq_x[_n_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Rm | VPST | | |
| float32_t b, | p -> Rp | VADDT.F32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vaddq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VADDT.I8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vaddq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VADDT.I16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vaddq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VADDT.I32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vaddq_x[_n_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int8_t b, | p -> Rp | VADDT.I8 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vaddq_x[_n_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int16_t b, | p -> Rp | VADDT.I16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vaddq_x[_n_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VADDT.I32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vaddq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VADDT.I8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vaddq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VADDT.I16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vaddq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VADDT.I32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vaddq_x[_n_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| uint8_t b, | p -> Rp | VADDT.I8 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vaddq_x[_n_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| uint16_t b, | p -> Rp | VADDT.I16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vaddq_x[_n_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| uint32_t b, | p -> Rp | VADDT.I32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vaddlvaq[_s32]( | a -> [RdaHi,RdaLo] | VADDLVA.S32 RdaLo, RdaHi, Qm | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vaddlvaq[_u32]( | a -> [RdaHi,RdaLo] | VADDLVA.U32 RdaLo, RdaHi, Qm | [RdaHi,RdaLo] -> result | |
| uint64_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vaddlvaq_p[_s32]( | a -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VADDLVAT.S32 RdaLo, RdaHi, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vaddlvaq_p[_u32]( | a -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| uint64_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VADDLVAT.U32 RdaLo, RdaHi, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vaddlvq[_s32](int32x4_t a) | a -> Qm | VADDLV.S32 RdaLo, RdaHi, Qm | [RdaHi,RdaLo] -> result | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vaddlvq[_u32](uint32x4_t a) | a -> Qm | VADDLV.U32 RdaLo, RdaHi, Qm | [RdaHi,RdaLo] -> result | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vaddlvq_p[_s32]( | a -> Qm | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VADDLVT.S32 RdaLo, RdaHi, Qm | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vaddlvq_p[_u32]( | a -> Qm | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| uint32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VADDLVT.U32 RdaLo, RdaHi, Qm | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vaddvaq[_s8]( | a -> Rda | VADDVA.S8 Rda, Qm | Rda -> result | |
| int32_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vaddvaq[_s16]( | a -> Rda | VADDVA.S16 Rda, Qm | Rda -> result | |
| int32_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vaddvaq[_s32]( | a -> Rda | VADDVA.S32 Rda, Qm | Rda -> result | |
| int32_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vaddvaq[_u8]( | a -> Rda | VADDVA.U8 Rda, Qm | Rda -> result | |
| uint32_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vaddvaq[_u16]( | a -> Rda | VADDVA.U16 Rda, Qm | Rda -> result | |
| uint32_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vaddvaq[_u32]( | a -> Rda | VADDVA.U32 Rda, Qm | Rda -> result | |
| uint32_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vaddvaq_p[_s8]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VADDVAT.S8 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vaddvaq_p[_s16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VADDVAT.S16 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vaddvaq_p[_s32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VADDVAT.S32 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vaddvaq_p[_u8]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VADDVAT.U8 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vaddvaq_p[_u16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VADDVAT.U16 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vaddvaq_p[_u32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VADDVAT.U32 Rda, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vaddvq[_s8](int8x16_t a) | a -> Qm | VADDV.S8 Rda, Qm | Rda -> result | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vaddvq[_s16](int16x8_t a) | a -> Qm | VADDV.S16 Rda, Qm | Rda -> result | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vaddvq[_s32](int32x4_t a) | a -> Qm | VADDV.S32 Rda, Qm | Rda -> result | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vaddvq[_u8](uint8x16_t a) | a -> Qm | VADDV.U8 Rda, Qm | Rda -> result | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vaddvq[_u16](uint16x8_t a) | a -> Qm | VADDV.U16 Rda, Qm | Rda -> result | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vaddvq[_u32](uint32x4_t a) | a -> Qm | VADDV.U32 Rda, Qm | Rda -> result | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vaddvq_p[_s8]( | a -> Qm | VMSR P0, Rp | Rda -> result | |
| int8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VADDVT.S8 Rda, Qm | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vaddvq_p[_s16]( | a -> Qm | VMSR P0, Rp | Rda -> result | |
| int16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VADDVT.S16 Rda, Qm | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vaddvq_p[_s32]( | a -> Qm | VMSR P0, Rp | Rda -> result | |
| int32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VADDVT.S32 Rda, Qm | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vaddvq_p[_u8]( | a -> Qm | VMSR P0, Rp | Rda -> result | |
| uint8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VADDVT.U8 Rda, Qm | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vaddvq_p[_u16]( | a -> Qm | VMSR P0, Rp | Rda -> result | |
| uint16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VADDVT.U16 Rda, Qm | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vaddvq_p[_u32]( | a -> Qm | VMSR P0, Rp | Rda -> result | |
| uint32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VADDVT.U32 Rda, Qm | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhaddq[_n_s8]( | a -> Qn | VHADD.S8 Qd, Qn, Rm | Qd -> result | |
| int8x16_t a, | b -> Rm | | | |
| int8_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhaddq[_n_s16]( | a -> Qn | VHADD.S16 Qd, Qn, Rm | Qd -> result | |
| int16x8_t a, | b -> Rm | | | |
| int16_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhaddq[_n_s32]( | a -> Qn | VHADD.S32 Qd, Qn, Rm | Qd -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vhaddq[_n_u8]( | a -> Qn | VHADD.U8 Qd, Qn, Rm | Qd -> result | |
| uint8x16_t a, | b -> Rm | | | |
| uint8_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vhaddq[_n_u16]( | a -> Qn | VHADD.U16 Qd, Qn, Rm | Qd -> result | |
| uint16x8_t a, | b -> Rm | | | |
| uint16_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vhaddq[_n_u32]( | a -> Qn | VHADD.U32 Qd, Qn, Rm | Qd -> result | |
| uint32x4_t a, | b -> Rm | | | |
| uint32_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vhaddq[_s8]( | a -> Qn | VHADD.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vhaddq[_s16]( | a -> Qn | VHADD.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vhaddq[_s32]( | a -> Qn | VHADD.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vhaddq[_u8]( | a -> Qn | VHADD.U8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vhaddq[_u16]( | a -> Qn | VHADD.U16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vhaddq[_u32]( | a -> Qn | VHADD.U32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhaddq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Rm | VHADDT.S8 Qd, Qn, Rm | | |
| int8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhaddq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Rm | VHADDT.S16 Qd, Qn, Rm | | |
| int16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhaddq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Rm | VHADDT.S32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vhaddq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Rm | VHADDT.U8 Qd, Qn, Rm | | |
| uint8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vhaddq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Rm | VHADDT.U16 Qd, Qn, Rm | | |
| uint16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vhaddq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Rm | VHADDT.U32 Qd, Qn, Rm | | |
| uint32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhaddq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VHADDT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhaddq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VHADDT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhaddq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VHADDT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vhaddq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VHADDT.U8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vhaddq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VHADDT.U16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vhaddq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VHADDT.U32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhaddq_x[_n_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int8_t b, | p -> Rp | VHADDT.S8 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhaddq_x[_n_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int16_t b, | p -> Rp | VHADDT.S16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhaddq_x[_n_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VHADDT.S32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vhaddq_x[_n_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| uint8_t b, | p -> Rp | VHADDT.U8 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vhaddq_x[_n_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| uint16_t b, | p -> Rp | VHADDT.U16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vhaddq_x[_n_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| uint32_t b, | p -> Rp | VHADDT.U32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhaddq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VHADDT.S8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhaddq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VHADDT.S16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhaddq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VHADDT.S32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vhaddq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VHADDT.U8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vhaddq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VHADDT.U16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vhaddq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VHADDT.U32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vrhaddq[_s8]( | a -> Qn | VRHADD.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vrhaddq[_s16]( | a -> Qn | VRHADD.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vrhaddq[_s32]( | a -> Qn | VRHADD.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vrhaddq[_u8]( | a -> Qn | VRHADD.U8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vrhaddq[_u16]( | a -> Qn | VRHADD.U16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vrhaddq[_u32]( | a -> Qn | VRHADD.U32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrhaddq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VRHADDT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrhaddq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VRHADDT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vrhaddq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VRHADDT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrhaddq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VRHADDT.U8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrhaddq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VRHADDT.U16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vrhaddq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VRHADDT.U32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrhaddq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VRHADDT.S8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrhaddq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VRHADDT.S16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vrhaddq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VRHADDT.S32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrhaddq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VRHADDT.U8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrhaddq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VRHADDT.U16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vrhaddq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VRHADDT.U32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+-------------------------+-----------------------------------+-----------------------------+---------------------------+
Saturating addition
-------------------
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==========================================+========================+===========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqaddq[_n_s8]( | a -> Qn | VQADD.S8 Qd, Qn, Rm | Qd -> result | |
| int8x16_t a, | b -> Rm | | | |
| int8_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqaddq[_n_s16]( | a -> Qn | VQADD.S16 Qd, Qn, Rm | Qd -> result | |
| int16x8_t a, | b -> Rm | | | |
| int16_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqaddq[_n_s32]( | a -> Qn | VQADD.S32 Qd, Qn, Rm | Qd -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqaddq[_n_u8]( | a -> Qn | VQADD.U8 Qd, Qn, Rm | Qd -> result | |
| uint8x16_t a, | b -> Rm | | | |
| uint8_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqaddq[_n_u16]( | a -> Qn | VQADD.U16 Qd, Qn, Rm | Qd -> result | |
| uint16x8_t a, | b -> Rm | | | |
| uint16_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqaddq[_n_u32]( | a -> Qn | VQADD.U32 Qd, Qn, Rm | Qd -> result | |
| uint32x4_t a, | b -> Rm | | | |
| uint32_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vqaddq[_s8]( | a -> Qn | VQADD.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vqaddq[_s16]( | a -> Qn | VQADD.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vqaddq[_s32]( | a -> Qn | VQADD.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vqaddq[_u8]( | a -> Qn | VQADD.U8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vqaddq[_u16]( | a -> Qn | VQADD.U16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vqaddq[_u32]( | a -> Qn | VQADD.U32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqaddq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Rm | VQADDT.S8 Qd, Qn, Rm | | |
| int8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqaddq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Rm | VQADDT.S16 Qd, Qn, Rm | | |
| int16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqaddq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Rm | VQADDT.S32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqaddq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Rm | VQADDT.U8 Qd, Qn, Rm | | |
| uint8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqaddq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Rm | VQADDT.U16 Qd, Qn, Rm | | |
| uint16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqaddq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Rm | VQADDT.U32 Qd, Qn, Rm | | |
| uint32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqaddq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VQADDT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqaddq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQADDT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqaddq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQADDT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqaddq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VQADDT.U8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqaddq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VQADDT.U16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqaddq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VQADDT.U32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+

Multiply
~~~~~~~~

Multiplication
--------------

+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==============================================+========================+============================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmulhq[_s8]( | a -> Qn | VMULH.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmulhq[_s16]( | a -> Qn | VMULH.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmulhq[_s32]( | a -> Qn | VMULH.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmulhq[_u8]( | a -> Qn | VMULH.U8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmulhq[_u16]( | a -> Qn | VMULH.U16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmulhq[_u32]( | a -> Qn | VMULH.U32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmulhq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VMULHT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmulhq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VMULHT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmulhq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VMULHT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmulhq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VMULHT.U8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmulhq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VMULHT.U16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmulhq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VMULHT.U32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmulhq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMULHT.S8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmulhq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMULHT.S16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmulhq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMULHT.S32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmulhq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VMULHT.U8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmulhq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VMULHT.U16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmulhq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VMULHT.U32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmullbq_poly[_p8]( | a -> Qn | VMULLB.P8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmullbq_poly[_p16]( | a -> Qn | VMULLB.P16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmullbq_int[_s8]( | a -> Qn | VMULLB.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmullbq_int[_s16]( | a -> Qn | VMULLB.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vmullbq_int[_s32]( | a -> Qn | VMULLB.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmullbq_int[_u8]( | a -> Qn | VMULLB.U8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmullbq_int[_u16]( | a -> Qn | VMULLB.U16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vmullbq_int[_u32]( | a -> Qn | VMULLB.U32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmullbq_poly_m[_p8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VMULLBT.P8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmullbq_poly_m[_p16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VMULLBT.P16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmullbq_int_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VMULLBT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmullbq_int_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VMULLBT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vmullbq_int_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int64x2_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VMULLBT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmullbq_int_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VMULLBT.U8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmullbq_int_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VMULLBT.U16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vmullbq_int_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint64x2_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VMULLBT.U32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmullbq_poly_x[_p8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VMULLBT.P8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmullbq_poly_x[_p16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VMULLBT.P16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmullbq_int_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMULLBT.S8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmullbq_int_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMULLBT.S16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vmullbq_int_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMULLBT.S32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmullbq_int_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VMULLBT.U8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmullbq_int_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VMULLBT.U16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vmullbq_int_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VMULLBT.U32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmulltq_poly[_p8]( | a -> Qn | VMULLT.P8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmulltq_poly[_p16]( | a -> Qn | VMULLT.P16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmulltq_int[_s8]( | a -> Qn | VMULLT.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmulltq_int[_s16]( | a -> Qn | VMULLT.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vmulltq_int[_s32]( | a -> Qn | VMULLT.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmulltq_int[_u8]( | a -> Qn | VMULLT.U8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmulltq_int[_u16]( | a -> Qn | VMULLT.U16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vmulltq_int[_u32]( | a -> Qn | VMULLT.U32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmulltq_poly_m[_p8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VMULLTT.P8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmulltq_poly_m[_p16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VMULLTT.P16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmulltq_int_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VMULLTT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmulltq_int_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VMULLTT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vmulltq_int_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int64x2_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VMULLTT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmulltq_int_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VMULLTT.U8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmulltq_int_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VMULLTT.U16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vmulltq_int_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint64x2_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VMULLTT.U32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmulltq_poly_x[_p8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VMULLTT.P8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmulltq_poly_x[_p16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VMULLTT.P16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmulltq_int_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMULLTT.S8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmulltq_int_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMULLTT.S16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vmulltq_int_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMULLTT.S32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmulltq_int_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VMULLTT.U8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmulltq_int_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VMULLTT.U16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vmulltq_int_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VMULLTT.U32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vmulq[_f16]( | a -> Qn | VMUL.F16 Qd, Qn, Qm | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vmulq[_f32]( | a -> Qn | VMUL.F32 Qd, Qn, Qm | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vmulq[_n_f16]( | a -> Qn | VMUL.F16 Qd, Qn, Rm | Qd -> result | |
| float16x8_t a, | b -> Rm | | | |
| float16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vmulq[_n_f32]( | a -> Qn | VMUL.F32 Qd, Qn, Rm | Qd -> result | |
| float32x4_t a, | b -> Rm | | | |
| float32_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vmulq[_s8]( | a -> Qn | VMUL.I8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vmulq[_s16]( | a -> Qn | VMUL.I16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vmulq[_s32]( | a -> Qn | VMUL.I32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vmulq[_n_s8]( | a -> Qn | VMUL.I8 Qd, Qn, Rm | Qd -> result | |
| int8x16_t a, | b -> Rm | | | |
| int8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vmulq[_n_s16]( | a -> Qn | VMUL.I16 Qd, Qn, Rm | Qd -> result | |
| int16x8_t a, | b -> Rm | | | |
| int16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vmulq[_n_s32]( | a -> Qn | VMUL.I32 Qd, Qn, Rm | Qd -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vmulq[_u8]( | a -> Qn | VMUL.I8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vmulq[_u16]( | a -> Qn | VMUL.I16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vmulq[_u32]( | a -> Qn | VMUL.I32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vmulq[_n_u8]( | a -> Qn | VMUL.I8 Qd, Qn, Rm | Qd -> result | |
| uint8x16_t a, | b -> Rm | | | |
| uint8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vmulq[_n_u16]( | a -> Qn | VMUL.I16 Qd, Qn, Rm | Qd -> result | |
| uint16x8_t a, | b -> Rm | | | |
| uint16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vmulq[_n_u32]( | a -> Qn | VMUL.I32 Qd, Qn, Rm | Qd -> result | |
| uint32x4_t a, | b -> Rm | | | |
| uint32_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vmulq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VMULT.F16 Qd, Qn, Qm | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vmulq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VMULT.F32 Qd, Qn, Qm | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vmulq_m[_n_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Rm | VMULT.F16 Qd, Qn, Rm | | |
| float16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vmulq_m[_n_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Rm | VMULT.F32 Qd, Qn, Rm | | |
| float32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmulq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VMULT.I8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmulq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VMULT.I16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmulq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VMULT.I32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmulq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Rm | VMULT.I8 Qd, Qn, Rm | | |
| int8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmulq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Rm | VMULT.I16 Qd, Qn, Rm | | |
| int16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmulq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Rm | VMULT.I32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmulq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VMULT.I8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmulq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VMULT.I16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmulq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VMULT.I32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmulq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Rm | VMULT.I8 Qd, Qn, Rm | | |
| uint8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmulq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Rm | VMULT.I16 Qd, Qn, Rm | | |
| uint16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmulq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Rm | VMULT.I32 Qd, Qn, Rm | | |
| uint32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vmulq_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VMULT.F16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vmulq_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VMULT.F32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vmulq_x[_n_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Rm | VPST | | |
| float16_t b, | p -> Rp | VMULT.F16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vmulq_x[_n_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Rm | VPST | | |
| float32_t b, | p -> Rp | VMULT.F32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmulq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMULT.I8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmulq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMULT.I16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmulq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMULT.I32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmulq_x[_n_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int8_t b, | p -> Rp | VMULT.I8 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmulq_x[_n_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int16_t b, | p -> Rp | VMULT.I16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmulq_x[_n_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VMULT.I32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmulq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VMULT.I8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmulq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VMULT.I16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmulq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VMULT.I32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmulq_x[_n_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| uint8_t b, | p -> Rp | VMULT.I8 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmulq_x[_n_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| uint16_t b, | p -> Rp | VMULT.I16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmulq_x[_n_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| uint32_t b, | p -> Rp | VMULT.I32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrmulhq[_s8]( | a -> Qn | VRMULH.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrmulhq[_s16]( | a -> Qn | VRMULH.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vrmulhq[_s32]( | a -> Qn | VRMULH.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrmulhq[_u8]( | a -> Qn | VRMULH.U8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrmulhq[_u16]( | a -> Qn | VRMULH.U16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vrmulhq[_u32]( | a -> Qn | VRMULH.U32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrmulhq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VRMULHT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrmulhq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VRMULHT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vrmulhq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VRMULHT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrmulhq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VRMULHT.U8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrmulhq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VRMULHT.U16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vrmulhq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VRMULHT.U32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrmulhq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VRMULHT.S8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrmulhq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VRMULHT.S16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vrmulhq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VRMULHT.S32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrmulhq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VRMULHT.U8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrmulhq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VRMULHT.U16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vrmulhq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VRMULHT.U32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+----------------------------+------------------+---------------------------+
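The lane-wise behaviour of the intrinsics above can be summarized in portable C. The sketch below is a plain-C model of the semantics only, not compilable MVE code: ``rmulh_s16`` models the ``vrmulhq[_s16]`` lane operation (the rounded most significant half of the widened product), and ``mul_m_s16`` models the merging predication of ``vmulq_m[_s16]``, simplified to one predicate bit per lane (a real ``mve_pred16_t`` carries one bit per byte, so two bits per 16-bit lane).

```c
#include <stdint.h>

/* Model of the vrmulhq[_s16] lane operation: multiply two 16-bit
 * lanes to a 32-bit product, then return the rounded high half. */
static int16_t rmulh_s16(int16_t a, int16_t b)
{
    int32_t prod = (int32_t)a * (int32_t)b;
    return (int16_t)((prod + (1 << 15)) >> 16);
}

/* Model of the merging form vmulq_m[_s16]: lanes whose predicate
 * bit is set take the product; inactive lanes keep the value from
 * `inactive`. Simplified to one predicate bit per lane. */
static void mul_m_s16(int16_t dst[8], const int16_t inactive[8],
                      const int16_t a[8], const int16_t b[8],
                      uint16_t p)
{
    for (int lane = 0; lane < 8; lane++)
        dst[lane] = ((p >> lane) & 1)
                        ? (int16_t)(a[lane] * b[lane])
                        : inactive[lane];
}
```

The ``_x`` (don't-care) variants differ from ``_m`` only in that inactive lanes take an unspecified value instead of being copied from an ``inactive`` argument, which gives the compiler more scheduling freedom.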
Saturating multiply-accumulate
------------------------------
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=============================================+========================+================================+===================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmladhq[_s8]( | inactive -> Qd | VQDMLADH.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t inactive, | a -> Qn | | | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmladhq[_s16]( | inactive -> Qd | VQDMLADH.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t inactive, | a -> Qn | | | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmladhq[_s32]( | inactive -> Qd | VQDMLADH.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t inactive, | a -> Qn | | | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmladhq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VQDMLADHT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmladhq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQDMLADHT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmladhq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQDMLADHT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmladhxq[_s8]( | inactive -> Qd | VQDMLADHX.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t inactive, | a -> Qn | | | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmladhxq[_s16]( | inactive -> Qd | VQDMLADHX.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t inactive, | a -> Qn | | | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmladhxq[_s32]( | inactive -> Qd | VQDMLADHX.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t inactive, | a -> Qn | | | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmladhxq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VQDMLADHXT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmladhxq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQDMLADHXT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmladhxq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQDMLADHXT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmladhq[_s8]( | inactive -> Qd | VQRDMLADH.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t inactive, | a -> Qn | | | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmladhq[_s16]( | inactive -> Qd | VQRDMLADH.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t inactive, | a -> Qn | | | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmladhq[_s32]( | inactive -> Qd | VQRDMLADH.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t inactive, | a -> Qn | | | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmladhq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VQRDMLADHT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmladhq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQRDMLADHT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmladhq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQRDMLADHT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmladhxq[_s8]( | inactive -> Qd | VQRDMLADHX.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t inactive, | a -> Qn | | | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmladhxq[_s16]( | inactive -> Qd | VQRDMLADHX.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t inactive, | a -> Qn | | | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmladhxq[_s32]( | inactive -> Qd | VQRDMLADHX.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t inactive, | a -> Qn | | | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmladhxq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VQRDMLADHXT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmladhxq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQRDMLADHXT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmladhxq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQRDMLADHXT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmlahq[_n_s8]( | add -> Qda | VQDMLAH.S8 Qda, Qn, Rm | Qda -> result | |
| int8x16_t add, | m1 -> Qn | | | |
| int8x16_t m1, | m2 -> Rm | | | |
| int8_t m2) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmlahq[_n_s16]( | add -> Qda | VQDMLAH.S16 Qda, Qn, Rm | Qda -> result | |
| int16x8_t add, | m1 -> Qn | | | |
| int16x8_t m1, | m2 -> Rm | | | |
| int16_t m2) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmlahq[_n_s32]( | add -> Qda | VQDMLAH.S32 Qda, Qn, Rm | Qda -> result | |
| int32x4_t add, | m1 -> Qn | | | |
| int32x4_t m1, | m2 -> Rm | | | |
| int32_t m2) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmlahq_m[_n_s8]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| int8x16_t add, | m1 -> Qn | VPST | | |
| int8x16_t m1, | m2 -> Rm | VQDMLAHT.S8 Qda, Qn, Rm | | |
| int8_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmlahq_m[_n_s16]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| int16x8_t add, | m1 -> Qn | VPST | | |
| int16x8_t m1, | m2 -> Rm | VQDMLAHT.S16 Qda, Qn, Rm | | |
| int16_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmlahq_m[_n_s32]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| int32x4_t add, | m1 -> Qn | VPST | | |
| int32x4_t m1, | m2 -> Rm | VQDMLAHT.S32 Qda, Qn, Rm | | |
| int32_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmlahq[_n_s8]( | add -> Qda | VQRDMLAH.S8 Qda, Qn, Rm | Qda -> result | |
| int8x16_t add, | m1 -> Qn | | | |
| int8x16_t m1, | m2 -> Rm | | | |
| int8_t m2) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmlahq[_n_s16]( | add -> Qda | VQRDMLAH.S16 Qda, Qn, Rm | Qda -> result | |
| int16x8_t add, | m1 -> Qn | | | |
| int16x8_t m1, | m2 -> Rm | | | |
| int16_t m2) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmlahq[_n_s32]( | add -> Qda | VQRDMLAH.S32 Qda, Qn, Rm | Qda -> result | |
| int32x4_t add, | m1 -> Qn | | | |
| int32x4_t m1, | m2 -> Rm | | | |
| int32_t m2) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmlahq_m[_n_s8]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| int8x16_t add, | m1 -> Qn | VPST | | |
| int8x16_t m1, | m2 -> Rm | VQRDMLAHT.S8 Qda, Qn, Rm | | |
| int8_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmlahq_m[_n_s16]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| int16x8_t add, | m1 -> Qn | VPST | | |
| int16x8_t m1, | m2 -> Rm | VQRDMLAHT.S16 Qda, Qn, Rm | | |
| int16_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmlahq_m[_n_s32]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| int32x4_t add, | m1 -> Qn | VPST | | |
| int32x4_t m1, | m2 -> Rm | VQRDMLAHT.S32 Qda, Qn, Rm | | |
| int32_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmlashq[_n_s8]( | m1 -> Qda | VQDMLASH.S8 Qda, Qn, Rm | Qda -> result | |
| int8x16_t m1, | m2 -> Qn | | | |
| int8x16_t m2, | add -> Rm | | | |
| int8_t add) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmlashq[_n_s16]( | m1 -> Qda | VQDMLASH.S16 Qda, Qn, Rm | Qda -> result | |
| int16x8_t m1, | m2 -> Qn | | | |
| int16x8_t m2, | add -> Rm | | | |
| int16_t add) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmlashq[_n_s32]( | m1 -> Qda | VQDMLASH.S32 Qda, Qn, Rm | Qda -> result | |
| int32x4_t m1, | m2 -> Qn | | | |
| int32x4_t m2, | add -> Rm | | | |
| int32_t add) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmlashq_m[_n_s8]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| int8x16_t m1, | m2 -> Qn | VPST | | |
| int8x16_t m2, | add -> Rm | VQDMLASHT.S8 Qda, Qn, Rm | | |
| int8_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmlashq_m[_n_s16]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| int16x8_t m1, | m2 -> Qn | VPST | | |
| int16x8_t m2, | add -> Rm | VQDMLASHT.S16 Qda, Qn, Rm | | |
| int16_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmlashq_m[_n_s32]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| int32x4_t m1, | m2 -> Qn | VPST | | |
| int32x4_t m2, | add -> Rm | VQDMLASHT.S32 Qda, Qn, Rm | | |
| int32_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmlashq[_n_s8]( | m1 -> Qda | VQRDMLASH.S8 Qda, Qn, Rm | Qda -> result | |
| int8x16_t m1, | m2 -> Qn | | | |
| int8x16_t m2, | add -> Rm | | | |
| int8_t add) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmlashq[_n_s16]( | m1 -> Qda | VQRDMLASH.S16 Qda, Qn, Rm | Qda -> result | |
| int16x8_t m1, | m2 -> Qn | | | |
| int16x8_t m2, | add -> Rm | | | |
| int16_t add) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmlashq[_n_s32]( | m1 -> Qda | VQRDMLASH.S32 Qda, Qn, Rm | Qda -> result | |
| int32x4_t m1, | m2 -> Qn | | | |
| int32x4_t m2, | add -> Rm | | | |
| int32_t add) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmlashq_m[_n_s8]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| int8x16_t m1, | m2 -> Qn | VPST | | |
| int8x16_t m2, | add -> Rm | VQRDMLASHT.S8 Qda, Qn, Rm | | |
| int8_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmlashq_m[_n_s16]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| int16x8_t m1, | m2 -> Qn | VPST | | |
| int16x8_t m2, | add -> Rm | VQRDMLASHT.S16 Qda, Qn, Rm | | |
| int16_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmlashq_m[_n_s32]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| int32x4_t m1, | m2 -> Qn | VPST | | |
| int32x4_t m2, | add -> Rm | VQRDMLASHT.S32 Qda, Qn, Rm | | |
| int32_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmlsdhq[_s8]( | inactive -> Qd | VQDMLSDH.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t inactive, | a -> Qn | | | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmlsdhq[_s16]( | inactive -> Qd | VQDMLSDH.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t inactive, | a -> Qn | | | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmlsdhq[_s32]( | inactive -> Qd | VQDMLSDH.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t inactive, | a -> Qn | | | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmlsdhq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VQDMLSDHT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmlsdhq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQDMLSDHT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmlsdhq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQDMLSDHT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmlsdhxq[_s8]( | inactive -> Qd | VQDMLSDHX.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t inactive, | a -> Qn | | | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmlsdhxq[_s16]( | inactive -> Qd | VQDMLSDHX.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t inactive, | a -> Qn | | | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmlsdhxq[_s32]( | inactive -> Qd | VQDMLSDHX.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t inactive, | a -> Qn | | | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmlsdhxq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VQDMLSDHXT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmlsdhxq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQDMLSDHXT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmlsdhxq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQDMLSDHXT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmlsdhq[_s8]( | inactive -> Qd | VQRDMLSDH.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t inactive, | a -> Qn | | | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmlsdhq[_s16]( | inactive -> Qd | VQRDMLSDH.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t inactive, | a -> Qn | | | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmlsdhq[_s32]( | inactive -> Qd | VQRDMLSDH.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t inactive, | a -> Qn | | | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmlsdhq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VQRDMLSDHT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmlsdhq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQRDMLSDHT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmlsdhq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQRDMLSDHT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmlsdhxq[_s8]( | inactive -> Qd | VQRDMLSDHX.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t inactive, | a -> Qn | | | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmlsdhxq[_s16]( | inactive -> Qd | VQRDMLSDHX.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t inactive, | a -> Qn | | | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmlsdhxq[_s32]( | inactive -> Qd | VQRDMLSDHX.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t inactive, | a -> Qn | | | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmlsdhxq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VQRDMLSDHXT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmlsdhxq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQRDMLSDHXT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmlsdhxq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQRDMLSDHXT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------+------------------------+--------------------------------+-------------------+---------------------------+
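
The multiply-accumulate intrinsics above are vector operations on whole
``Q`` registers. As an informal illustration only, the following scalar
sketch models the arithmetic assumed for one 16-bit lane of
``vqrdmlahq[_n_s16]``: double the product, add the accumulator shifted
into the high half plus a rounding constant, then shift down and
saturate. The helper name ``qrdmlah_s16`` is ours, not part of the ACLE
API, and the semantics are taken from the analogous AArch64 ``SQRDMLAH``
definition.

.. code:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Illustrative scalar model of one 16-bit lane of VQRDMLAH.S16.
    * Hypothetical helper for exposition; the real intrinsic operates
    * on a full vector. */
   static int16_t qrdmlah_s16(int16_t add, int16_t m1, int16_t m2)
   {
       /* Widen so that 2*m1*m2 + (add << 16) cannot overflow. */
       int64_t acc = ((int64_t)add << 16)
                   + 2 * (int64_t)m1 * (int64_t)m2
                   + (1 << 15);                /* rounding constant */
       int64_t res = acc >> 16;
       if (res > INT16_MAX) res = INT16_MAX;   /* saturate to Q15 range */
       if (res < INT16_MIN) res = INT16_MIN;
       return (int16_t)res;
   }

   int main(void)
   {
       /* 0.5 * 0.5 in Q15, accumulated onto 100. */
       printf("%d\n", qrdmlah_s16(100, 16384, 16384));
       return 0;
   }

The same model, with the rounding constant omitted, covers ``VQDMLAH``;
the ``_m`` (merging-predicated) forms simply leave unselected lanes
equal to the accumulator input.
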

Saturating multiply
-------------------

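
``VQDMULH`` returns the high half of the doubled product with
saturation, and ``VQRDMULH`` additionally rounds before the truncation.
As an informal sketch of one 16-bit lane (the helper ``qdmulh_s16`` is
ours, not an ACLE intrinsic):

.. code:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Scalar model of one 16-bit lane of VQDMULH.S16 (rounding == 0)
    * and VQRDMULH.S16 (rounding == 1).  Hypothetical helper for
    * illustration only. */
   static int16_t qdmulh_s16(int16_t a, int16_t b, int rounding)
   {
       int64_t prod = 2 * (int64_t)a * (int64_t)b;  /* doubling multiply */
       if (rounding)
           prod += 1 << 15;                         /* round to nearest */
       int64_t res = prod >> 16;                    /* keep the high half */
       if (res > INT16_MAX) res = INT16_MAX;        /* saturate */
       if (res < INT16_MIN) res = INT16_MIN;
       return (int16_t)res;
   }

   int main(void)
   {
       /* 0.5 * 0.5 in Q15 fixed point. */
       printf("%d\n", qdmulh_s16(16384, 16384, 0));
       /* -1.0 * -1.0 would be +1.0, which saturates to 0x7FFF. */
       printf("%d\n", qdmulh_s16(-32768, -32768, 1));
       return 0;
   }

Interpreted as Q15 fixed point, this is an ordinary fractional multiply,
which is why these instructions are the workhorses of fixed-point DSP
kernels.
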
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+============================================+========================+==============================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmulhq[_n_s8]( | a -> Qn | VQDMULH.S8 Qd, Qn, Rm | Qd -> result | |
| int8x16_t a, | b -> Rm | | | |
| int8_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vqdmulhq[_n_s16]( | a -> Qn | VQDMULH.S16 Qd, Qn, Rm | Qd -> result | |
| int16x8_t a, | b -> Rm | | | |
| int16_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vqdmulhq[_n_s32]( | a -> Qn | VQDMULH.S32 Qd, Qn, Rm | Qd -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmulhq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Rm | VQDMULHT.S8 Qd, Qn, Rm | | |
| int8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmulhq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Rm | VQDMULHT.S16 Qd, Qn, Rm | | |
| int16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmulhq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Rm | VQDMULHT.S32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmulhq[_s8]( | a -> Qn | VQDMULH.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vqdmulhq[_s16]( | a -> Qn | VQDMULH.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vqdmulhq[_s32]( | a -> Qn | VQDMULH.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqdmulhq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VQDMULHT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqdmulhq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQDMULHT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmulhq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQDMULHT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmulhq[_n_s8]( | a -> Qn | VQRDMULH.S8 Qd, Qn, Rm | Qd -> result | |
| int8x16_t a, | b -> Rm | | | |
| int8_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vqrdmulhq[_n_s16]( | a -> Qn | VQRDMULH.S16 Qd, Qn, Rm | Qd -> result | |
| int16x8_t a, | b -> Rm | | | |
| int16_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vqrdmulhq[_n_s32]( | a -> Qn | VQRDMULH.S32 Qd, Qn, Rm | Qd -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmulhq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Rm | VQRDMULHT.S8 Qd, Qn, Rm | | |
| int8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmulhq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Rm | VQRDMULHT.S16 Qd, Qn, Rm | | |
| int16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmulhq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Rm | VQRDMULHT.S32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmulhq[_s8]( | a -> Qn | VQRDMULH.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vqrdmulhq[_s16]( | a -> Qn | VQRDMULH.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vqrdmulhq[_s32]( | a -> Qn | VQRDMULH.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrdmulhq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VQRDMULHT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrdmulhq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQRDMULHT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrdmulhq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQRDMULHT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmullbq[_n_s16]( | a -> Qn | VQDMULLB.S16 Qd, Qn, Rm | Qd -> result | |
| int16x8_t a, | b -> Rm | | | |
| int16_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vqdmullbq[_n_s32]( | a -> Qn | VQDMULLB.S32 Qd, Qn, Rm | Qd -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmullbq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Rm | VQDMULLBT.S16 Qd, Qn, Rm | | |
| int16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vqdmullbq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int64x2_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Rm | VQDMULLBT.S32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmullbq[_s16]( | a -> Qn | VQDMULLB.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vqdmullbq[_s32]( | a -> Qn | VQDMULLB.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmullbq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQDMULLBT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vqdmullbq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int64x2_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQDMULLBT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmulltq[_n_s16]( | a -> Qn | VQDMULLT.S16 Qd, Qn, Rm | Qd -> result | |
| int16x8_t a, | b -> Rm | | | |
| int16_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vqdmulltq[_n_s32]( | a -> Qn | VQDMULLT.S32 Qd, Qn, Rm | Qd -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmulltq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Rm | VQDMULLTT.S16 Qd, Qn, Rm | | |
| int16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vqdmulltq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int64x2_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Rm | VQDMULLTT.S32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmulltq[_s16]( | a -> Qn | VQDMULLT.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vqdmulltq[_s32]( | a -> Qn | VQDMULLT.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqdmulltq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQDMULLTT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vqdmulltq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int64x2_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQDMULLTT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
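The per-lane arithmetic of the doubling-multiply intrinsics above can be illustrated with a small scalar reference model. This is a hedged sketch, not part of the specification: the helper names ``ref_vqrdmulh_s16`` and ``ref_vqdmullb_s16`` are hypothetical, and they model a single lane of ``vqrdmulhq[_s16]`` and ``vqdmullbq[_s16]`` respectively, rather than invoking the intrinsics themselves.

```c
#include <stdint.h>

/* Hypothetical scalar model of one lane of VQRDMULH.S16, as produced by
   vqrdmulhq[_s16]: compute (2*a*b + 0x8000) >> 16 in wide arithmetic,
   then saturate the result to the int16_t range. */
static int16_t ref_vqrdmulh_s16(int16_t a, int16_t b)
{
    int64_t prod = 2 * (int64_t)a * (int64_t)b + 0x8000; /* round */
    int64_t res = prod >> 16;
    if (res > INT16_MAX) res = INT16_MAX; /* saturate, e.g. -1.0 * -1.0 in Q15 */
    if (res < INT16_MIN) res = INT16_MIN;
    return (int16_t)res;
}

/* Hypothetical scalar model of one lane pair of VQDMULLB.S16, as produced
   by vqdmullbq[_s16]: multiply the bottom (even-numbered) 16-bit lanes,
   double, and saturate into a 32-bit result. Only INT16_MIN * INT16_MIN
   can overflow after doubling. */
static int32_t ref_vqdmullb_s16(int16_t a, int16_t b)
{
    int64_t prod = 2 * (int64_t)a * (int64_t)b;
    if (prod > INT32_MAX) return INT32_MAX;
    return (int32_t)prod;
}
```

For example, multiplying 0.5 by 0.5 in Q15 format (``0x4000 * 0x4000``) yields ``0x2000`` (0.25) from the rounding model, while ``INT16_MIN * INT16_MIN`` saturates to ``INT16_MAX`` in the 16-bit form and to ``INT32_MAX`` in the widening form.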
Multiply-accumulate
-------------------
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+============================================+===========================+============================================+=============================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavaq[_s8]( | add -> Rda | VMLADAVA.S8 Rda, Qn, Qm | Rda -> result | |
| int32_t add, | m1 -> Qn | | | |
| int8x16_t m1, | m2 -> Qm | | | |
| int8x16_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavaq[_s16]( | add -> Rda | VMLADAVA.S16 Rda, Qn, Qm | Rda -> result | |
| int32_t add, | m1 -> Qn | | | |
| int16x8_t m1, | m2 -> Qm | | | |
| int16x8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavaq[_s32]( | add -> Rda | VMLADAVA.S32 Rda, Qn, Qm | Rda -> result | |
| int32_t add, | m1 -> Qn | | | |
| int32x4_t m1, | m2 -> Qm | | | |
| int32x4_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmladavaq[_u8]( | add -> Rda | VMLADAVA.U8 Rda, Qn, Qm | Rda -> result | |
| uint32_t add, | m1 -> Qn | | | |
| uint8x16_t m1, | m2 -> Qm | | | |
| uint8x16_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmladavaq[_u16]( | add -> Rda | VMLADAVA.U16 Rda, Qn, Qm | Rda -> result | |
| uint32_t add, | m1 -> Qn | | | |
| uint16x8_t m1, | m2 -> Qm | | | |
| uint16x8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmladavaq[_u32]( | add -> Rda | VMLADAVA.U32 Rda, Qn, Qm | Rda -> result | |
| uint32_t add, | m1 -> Qn | | | |
| uint32x4_t m1, | m2 -> Qm | | | |
| uint32x4_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavaq_p[_s8]( | add -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t add, | m1 -> Qn | VPST | | |
| int8x16_t m1, | m2 -> Qm | VMLADAVAT.S8 Rda, Qn, Qm | | |
| int8x16_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavaq_p[_s16]( | add -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t add, | m1 -> Qn | VPST | | |
| int16x8_t m1, | m2 -> Qm | VMLADAVAT.S16 Rda, Qn, Qm | | |
| int16x8_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavaq_p[_s32]( | add -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t add, | m1 -> Qn | VPST | | |
| int32x4_t m1, | m2 -> Qm | VMLADAVAT.S32 Rda, Qn, Qm | | |
| int32x4_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmladavaq_p[_u8]( | add -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t add, | m1 -> Qn | VPST | | |
| uint8x16_t m1, | m2 -> Qm | VMLADAVAT.U8 Rda, Qn, Qm | | |
| uint8x16_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmladavaq_p[_u16]( | add -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t add, | m1 -> Qn | VPST | | |
| uint16x8_t m1, | m2 -> Qm | VMLADAVAT.U16 Rda, Qn, Qm | | |
| uint16x8_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmladavaq_p[_u32]( | add -> Rda | VMSR P0, Rp | Rda -> result | |
| uint32_t add, | m1 -> Qn | VPST | | |
| uint32x4_t m1, | m2 -> Qm | VMLADAVAT.U32 Rda, Qn, Qm | | |
| uint32x4_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavq[_s8]( | m1 -> Qn | VMLADAV.S8 Rda, Qn, Qm | Rda -> result | |
| int8x16_t m1, | m2 -> Qm | | | |
| int8x16_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavq[_s16]( | m1 -> Qn | VMLADAV.S16 Rda, Qn, Qm | Rda -> result | |
| int16x8_t m1, | m2 -> Qm | | | |
| int16x8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavq[_s32]( | m1 -> Qn | VMLADAV.S32 Rda, Qn, Qm | Rda -> result | |
| int32x4_t m1, | m2 -> Qm | | | |
| int32x4_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmladavq[_u8]( | m1 -> Qn | VMLADAV.U8 Rda, Qn, Qm | Rda -> result | |
| uint8x16_t m1, | m2 -> Qm | | | |
| uint8x16_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmladavq[_u16]( | m1 -> Qn | VMLADAV.U16 Rda, Qn, Qm | Rda -> result | |
| uint16x8_t m1, | m2 -> Qm | | | |
| uint16x8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmladavq[_u32]( | m1 -> Qn | VMLADAV.U32 Rda, Qn, Qm | Rda -> result | |
| uint32x4_t m1, | m2 -> Qm | | | |
| uint32x4_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavq_p[_s8]( | m1 -> Qn | VMSR P0, Rp | Rda -> result | |
| int8x16_t m1, | m2 -> Qm | VPST | | |
| int8x16_t m2, | p -> Rp | VMLADAVT.S8 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavq_p[_s16]( | m1 -> Qn | VMSR P0, Rp | Rda -> result | |
| int16x8_t m1, | m2 -> Qm | VPST | | |
| int16x8_t m2, | p -> Rp | VMLADAVT.S16 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavq_p[_s32]( | m1 -> Qn | VMSR P0, Rp | Rda -> result | |
| int32x4_t m1, | m2 -> Qm | VPST | | |
| int32x4_t m2, | p -> Rp | VMLADAVT.S32 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmladavq_p[_u8]( | m1 -> Qn | VMSR P0, Rp | Rda -> result | |
| uint8x16_t m1, | m2 -> Qm | VPST | | |
| uint8x16_t m2, | p -> Rp | VMLADAVT.U8 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmladavq_p[_u16]( | m1 -> Qn | VMSR P0, Rp | Rda -> result | |
| uint16x8_t m1, | m2 -> Qm | VPST | | |
| uint16x8_t m2, | p -> Rp | VMLADAVT.U16 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]vmladavq_p[_u32]( | m1 -> Qn | VMSR P0, Rp | Rda -> result | |
| uint32x4_t m1, | m2 -> Qm | VPST | | |
| uint32x4_t m2, | p -> Rp | VMLADAVT.U32 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavaxq[_s8]( | add -> Rda | VMLADAVAX.S8 Rda, Qn, Qm | Rda -> result | |
| int32_t add, | m1 -> Qn | | | |
| int8x16_t m1, | m2 -> Qm | | | |
| int8x16_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavaxq[_s16]( | add -> Rda | VMLADAVAX.S16 Rda, Qn, Qm | Rda -> result | |
| int32_t add, | m1 -> Qn | | | |
| int16x8_t m1, | m2 -> Qm | | | |
| int16x8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavaxq[_s32]( | add -> Rda | VMLADAVAX.S32 Rda, Qn, Qm | Rda -> result | |
| int32_t add, | m1 -> Qn | | | |
| int32x4_t m1, | m2 -> Qm | | | |
| int32x4_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavaxq_p[_s8]( | add -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t add, | m1 -> Qn | VPST | | |
| int8x16_t m1, | m2 -> Qm | VMLADAVAXT.S8 Rda, Qn, Qm | | |
| int8x16_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavaxq_p[_s16]( | add -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t add, | m1 -> Qn | VPST | | |
| int16x8_t m1, | m2 -> Qm | VMLADAVAXT.S16 Rda, Qn, Qm | | |
| int16x8_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavaxq_p[_s32]( | add -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t add, | m1 -> Qn | VPST | | |
| int32x4_t m1, | m2 -> Qm | VMLADAVAXT.S32 Rda, Qn, Qm | | |
| int32x4_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavxq[_s8]( | m1 -> Qn | VMLADAVX.S8 Rda, Qn, Qm | Rda -> result | |
| int8x16_t m1, | m2 -> Qm | | | |
| int8x16_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavxq[_s16]( | m1 -> Qn | VMLADAVX.S16 Rda, Qn, Qm | Rda -> result | |
| int16x8_t m1, | m2 -> Qm | | | |
| int16x8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavxq[_s32]( | m1 -> Qn | VMLADAVX.S32 Rda, Qn, Qm | Rda -> result | |
| int32x4_t m1, | m2 -> Qm | | | |
| int32x4_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavxq_p[_s8]( | m1 -> Qn | VMSR P0, Rp | Rda -> result | |
| int8x16_t m1, | m2 -> Qm | VPST | | |
| int8x16_t m2, | p -> Rp | VMLADAVXT.S8 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavxq_p[_s16]( | m1 -> Qn | VMSR P0, Rp | Rda -> result | |
| int16x8_t m1, | m2 -> Qm | VPST | | |
| int16x8_t m2, | p -> Rp | VMLADAVXT.S16 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmladavxq_p[_s32]( | m1 -> Qn | VMSR P0, Rp | Rda -> result | |
| int32x4_t m1, | m2 -> Qm | VPST | | |
| int32x4_t m2, | p -> Rp | VMLADAVXT.S32 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavaq[_s16]( | add -> [RdaHi,RdaLo] | VMLALDAVA.S16 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int64_t add, | m1 -> Qn | | | |
| int16x8_t m1, | m2 -> Qm | | | |
| int16x8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavaq[_s32]( | add -> [RdaHi,RdaLo] | VMLALDAVA.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int64_t add, | m1 -> Qn | | | |
| int32x4_t m1, | m2 -> Qm | | | |
| int32x4_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vmlaldavaq[_u16]( | add -> [RdaHi,RdaLo] | VMLALDAVA.U16 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| uint64_t add, | m1 -> Qn | | | |
| uint16x8_t m1, | m2 -> Qm | | | |
| uint16x8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vmlaldavaq[_u32]( | add -> [RdaHi,RdaLo] | VMLALDAVA.U32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| uint64_t add, | m1 -> Qn | | | |
| uint32x4_t m1, | m2 -> Qm | | | |
| uint32x4_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavaq_p[_s16]( | add -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int64_t add, | m1 -> Qn | VPST | | |
| int16x8_t m1, | m2 -> Qm | VMLALDAVAT.S16 RdaLo, RdaHi, Qn, Qm | | |
| int16x8_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavaq_p[_s32]( | add -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int64_t add, | m1 -> Qn | VPST | | |
| int32x4_t m1, | m2 -> Qm | VMLALDAVAT.S32 RdaLo, RdaHi, Qn, Qm | | |
| int32x4_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vmlaldavaq_p[_u16]( | add -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| uint64_t add, | m1 -> Qn | VPST | | |
| uint16x8_t m1, | m2 -> Qm | VMLALDAVAT.U16 RdaLo, RdaHi, Qn, Qm | | |
| uint16x8_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vmlaldavaq_p[_u32]( | add -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| uint64_t add, | m1 -> Qn | VPST | | |
| uint32x4_t m1, | m2 -> Qm | VMLALDAVAT.U32 RdaLo, RdaHi, Qn, Qm | | |
| uint32x4_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavq[_s16]( | m1 -> Qn | VMLALDAV.S16 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int16x8_t m1, | m2 -> Qm | | | |
| int16x8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavq[_s32]( | m1 -> Qn | VMLALDAV.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int32x4_t m1, | m2 -> Qm | | | |
| int32x4_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vmlaldavq[_u16]( | m1 -> Qn | VMLALDAV.U16 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| uint16x8_t m1, | m2 -> Qm | | | |
| uint16x8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vmlaldavq[_u32]( | m1 -> Qn | VMLALDAV.U32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| uint32x4_t m1, | m2 -> Qm | | | |
| uint32x4_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavq_p[_s16]( | m1 -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int16x8_t m1, | m2 -> Qm | VPST | | |
| int16x8_t m2, | p -> Rp | VMLALDAVT.S16 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavq_p[_s32]( | m1 -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int32x4_t m1, | m2 -> Qm | VPST | | |
| int32x4_t m2, | p -> Rp | VMLALDAVT.S32 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vmlaldavq_p[_u16]( | m1 -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| uint16x8_t m1, | m2 -> Qm | VPST | | |
| uint16x8_t m2, | p -> Rp | VMLALDAVT.U16 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vmlaldavq_p[_u32]( | m1 -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| uint32x4_t m1, | m2 -> Qm | VPST | | |
| uint32x4_t m2, | p -> Rp | VMLALDAVT.U32 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavaxq[_s16]( | add -> [RdaHi,RdaLo] | VMLALDAVAX.S16 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int64_t add, | m1 -> Qn | | | |
| int16x8_t m1, | m2 -> Qm | | | |
| int16x8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavaxq[_s32]( | add -> [RdaHi,RdaLo] | VMLALDAVAX.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int64_t add, | m1 -> Qn | | | |
| int32x4_t m1, | m2 -> Qm | | | |
| int32x4_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavaxq_p[_s16]( | add -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int64_t add, | m1 -> Qn | VPST | | |
| int16x8_t m1, | m2 -> Qm | VMLALDAVAXT.S16 RdaLo, RdaHi, Qn, Qm | | |
| int16x8_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavaxq_p[_s32]( | add -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int64_t add, | m1 -> Qn | VPST | | |
| int32x4_t m1, | m2 -> Qm | VMLALDAVAXT.S32 RdaLo, RdaHi, Qn, Qm | | |
| int32x4_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavxq[_s16]( | m1 -> Qn | VMLALDAVX.S16 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int16x8_t m1, | m2 -> Qm | | | |
| int16x8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavxq[_s32]( | m1 -> Qn | VMLALDAVX.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int32x4_t m1, | m2 -> Qm | | | |
| int32x4_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavxq_p[_s16]( | m1 -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int16x8_t m1, | m2 -> Qm | VPST | | |
| int16x8_t m2, | p -> Rp | VMLALDAVXT.S16 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlaldavxq_p[_s32]( | m1 -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int32x4_t m1, | m2 -> Qm | VPST | | |
| int32x4_t m2, | p -> Rp | VMLALDAVXT.S32 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmlaq[_n_s8]( | add -> Qda | VMLA.S8 Qda, Qn, Rm | Qda -> result | |
| int8x16_t add, | m1 -> Qn | | | |
| int8x16_t m1, | m2 -> Rm | | | |
| int8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmlaq[_n_s16]( | add -> Qda | VMLA.S16 Qda, Qn, Rm | Qda -> result | |
| int16x8_t add, | m1 -> Qn | | | |
| int16x8_t m1, | m2 -> Rm | | | |
| int16_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmlaq[_n_s32]( | add -> Qda | VMLA.S32 Qda, Qn, Rm | Qda -> result | |
| int32x4_t add, | m1 -> Qn | | | |
| int32x4_t m1, | m2 -> Rm | | | |
| int32_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmlaq[_n_u8]( | add -> Qda | VMLA.U8 Qda, Qn, Rm | Qda -> result | |
| uint8x16_t add, | m1 -> Qn | | | |
| uint8x16_t m1, | m2 -> Rm | | | |
| uint8_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmlaq[_n_u16]( | add -> Qda | VMLA.U16 Qda, Qn, Rm | Qda -> result | |
| uint16x8_t add, | m1 -> Qn | | | |
| uint16x8_t m1, | m2 -> Rm | | | |
| uint16_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmlaq[_n_u32]( | add -> Qda | VMLA.U32 Qda, Qn, Rm | Qda -> result | |
| uint32x4_t add, | m1 -> Qn | | | |
| uint32x4_t m1, | m2 -> Rm | | | |
| uint32_t m2) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmlaq_m[_n_s8]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| int8x16_t add, | m1 -> Qn | VPST | | |
| int8x16_t m1, | m2 -> Rm | VMLAT.S8 Qda, Qn, Rm | | |
| int8_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmlaq_m[_n_s16]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| int16x8_t add, | m1 -> Qn | VPST | | |
| int16x8_t m1, | m2 -> Rm | VMLAT.S16 Qda, Qn, Rm | | |
| int16_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmlaq_m[_n_s32]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| int32x4_t add, | m1 -> Qn | VPST | | |
| int32x4_t m1, | m2 -> Rm | VMLAT.S32 Qda, Qn, Rm | | |
| int32_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmlaq_m[_n_u8]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| uint8x16_t add, | m1 -> Qn | VPST | | |
| uint8x16_t m1, | m2 -> Rm | VMLAT.U8 Qda, Qn, Rm | | |
| uint8_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmlaq_m[_n_u16]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| uint16x8_t add, | m1 -> Qn | VPST | | |
| uint16x8_t m1, | m2 -> Rm | VMLAT.U16 Qda, Qn, Rm | | |
| uint16_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmlaq_m[_n_u32]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| uint32x4_t add, | m1 -> Qn | VPST | | |
| uint32x4_t m1, | m2 -> Rm | VMLAT.U32 Qda, Qn, Rm | | |
| uint32_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmlasq[_n_s8]( | m1 -> Qda | VMLAS.S8 Qda, Qn, Rm | Qda -> result | |
| int8x16_t m1, | m2 -> Qn | | | |
| int8x16_t m2, | add -> Rm | | | |
| int8_t add) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmlasq[_n_s16]( | m1 -> Qda | VMLAS.S16 Qda, Qn, Rm | Qda -> result | |
| int16x8_t m1, | m2 -> Qn | | | |
| int16x8_t m2, | add -> Rm | | | |
| int16_t add) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmlasq[_n_s32]( | m1 -> Qda | VMLAS.S32 Qda, Qn, Rm | Qda -> result | |
| int32x4_t m1, | m2 -> Qn | | | |
| int32x4_t m2, | add -> Rm | | | |
| int32_t add) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmlasq[_n_u8]( | m1 -> Qda | VMLAS.U8 Qda, Qn, Rm | Qda -> result | |
| uint8x16_t m1, | m2 -> Qn | | | |
| uint8x16_t m2, | add -> Rm | | | |
| uint8_t add) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmlasq[_n_u16]( | m1 -> Qda | VMLAS.U16 Qda, Qn, Rm | Qda -> result | |
| uint16x8_t m1, | m2 -> Qn | | | |
| uint16x8_t m2, | add -> Rm | | | |
| uint16_t add) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmlasq[_n_u32]( | m1 -> Qda | VMLAS.U32 Qda, Qn, Rm | Qda -> result | |
| uint32x4_t m1, | m2 -> Qn | | | |
| uint32x4_t m2, | add -> Rm | | | |
| uint32_t add) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmlasq_m[_n_s8]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| int8x16_t m1, | m2 -> Qn | VPST | | |
| int8x16_t m2, | add -> Rm | VMLAST.S8 Qda, Qn, Rm | | |
| int8_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmlasq_m[_n_s16]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| int16x8_t m1, | m2 -> Qn | VPST | | |
| int16x8_t m2, | add -> Rm | VMLAST.S16 Qda, Qn, Rm | | |
| int16_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmlasq_m[_n_s32]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| int32x4_t m1, | m2 -> Qn | VPST | | |
| int32x4_t m2, | add -> Rm | VMLAST.S32 Qda, Qn, Rm | | |
| int32_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmlasq_m[_n_u8]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| uint8x16_t m1, | m2 -> Qn | VPST | | |
| uint8x16_t m2, | add -> Rm | VMLAST.U8 Qda, Qn, Rm | | |
| uint8_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmlasq_m[_n_u16]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| uint16x8_t m1, | m2 -> Qn | VPST | | |
| uint16x8_t m2, | add -> Rm | VMLAST.U16 Qda, Qn, Rm | | |
| uint16_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmlasq_m[_n_u32]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| uint32x4_t m1, | m2 -> Qn | VPST | | |
| uint32x4_t m2, | add -> Rm | VMLAST.U32 Qda, Qn, Rm | | |
| uint32_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavaq[_s8]( | a -> Rda | VMLSDAVA.S8 Rda, Qn, Qm | Rda -> result | |
| int32_t a, | b -> Qn | | | |
| int8x16_t b, | c -> Qm | | | |
| int8x16_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavaq[_s16]( | a -> Rda | VMLSDAVA.S16 Rda, Qn, Qm | Rda -> result | |
| int32_t a, | b -> Qn | | | |
| int16x8_t b, | c -> Qm | | | |
| int16x8_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavaq[_s32]( | a -> Rda | VMLSDAVA.S32 Rda, Qn, Qm | Rda -> result | |
| int32_t a, | b -> Qn | | | |
| int32x4_t b, | c -> Qm | | | |
| int32x4_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavaq_p[_s8]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t a, | b -> Qn | VPST | | |
| int8x16_t b, | c -> Qm | VMLSDAVAT.S8 Rda, Qn, Qm | | |
| int8x16_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavaq_p[_s16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t a, | b -> Qn | VPST | | |
| int16x8_t b, | c -> Qm | VMLSDAVAT.S16 Rda, Qn, Qm | | |
| int16x8_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavaq_p[_s32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t a, | b -> Qn | VPST | | |
| int32x4_t b, | c -> Qm | VMLSDAVAT.S32 Rda, Qn, Qm | | |
| int32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavq[_s8]( | a -> Qn | VMLSDAV.S8 Rda, Qn, Qm | Rda -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavq[_s16]( | a -> Qn | VMLSDAV.S16 Rda, Qn, Qm | Rda -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavq[_s32]( | a -> Qn | VMLSDAV.S32 Rda, Qn, Qm | Rda -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavq_p[_s8]( | a -> Qn | VMSR P0, Rp | Rda -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMLSDAVT.S8 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavq_p[_s16]( | a -> Qn | VMSR P0, Rp | Rda -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMLSDAVT.S16 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavq_p[_s32]( | a -> Qn | VMSR P0, Rp | Rda -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMLSDAVT.S32 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavaxq[_s8]( | a -> Rda | VMLSDAVAX.S8 Rda, Qn, Qm | Rda -> result | |
| int32_t a, | b -> Qn | | | |
| int8x16_t b, | c -> Qm | | | |
| int8x16_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavaxq[_s16]( | a -> Rda | VMLSDAVAX.S16 Rda, Qn, Qm | Rda -> result | |
| int32_t a, | b -> Qn | | | |
| int16x8_t b, | c -> Qm | | | |
| int16x8_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavaxq[_s32]( | a -> Rda | VMLSDAVAX.S32 Rda, Qn, Qm | Rda -> result | |
| int32_t a, | b -> Qn | | | |
| int32x4_t b, | c -> Qm | | | |
| int32x4_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavaxq_p[_s8]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t a, | b -> Qn | VPST | | |
| int8x16_t b, | c -> Qm | VMLSDAVAXT.S8 Rda, Qn, Qm | | |
| int8x16_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavaxq_p[_s16]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t a, | b -> Qn | VPST | | |
| int16x8_t b, | c -> Qm | VMLSDAVAXT.S16 Rda, Qn, Qm | | |
| int16x8_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavaxq_p[_s32]( | a -> Rda | VMSR P0, Rp | Rda -> result | |
| int32_t a, | b -> Qn | VPST | | |
| int32x4_t b, | c -> Qm | VMLSDAVAXT.S32 Rda, Qn, Qm | | |
| int32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavxq[_s8]( | a -> Qn | VMLSDAVX.S8 Rda, Qn, Qm | Rda -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavxq[_s16]( | a -> Qn | VMLSDAVX.S16 Rda, Qn, Qm | Rda -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavxq[_s32]( | a -> Qn | VMLSDAVX.S32 Rda, Qn, Qm | Rda -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavxq_p[_s8]( | a -> Qn | VMSR P0, Rp | Rda -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VMLSDAVXT.S8 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavxq_p[_s16]( | a -> Qn | VMSR P0, Rp | Rda -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMLSDAVXT.S16 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]vmlsdavxq_p[_s32]( | a -> Qn | VMSR P0, Rp | Rda -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMLSDAVXT.S32 Rda, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavaq[_s16]( | a -> [RdaHi,RdaLo] | VMLSLDAVA.S16 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | | | |
| int16x8_t b, | c -> Qm | | | |
| int16x8_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavaq[_s32]( | a -> [RdaHi,RdaLo] | VMLSLDAVA.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | | | |
| int32x4_t b, | c -> Qm | | | |
| int32x4_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavaq_p[_s16]( | a -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | VPST | | |
| int16x8_t b, | c -> Qm | VMLSLDAVAT.S16 RdaLo, RdaHi, Qn, Qm | | |
| int16x8_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavaq_p[_s32]( | a -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | VPST | | |
| int32x4_t b, | c -> Qm | VMLSLDAVAT.S32 RdaLo, RdaHi, Qn, Qm | | |
| int32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavq[_s16]( | a -> Qn | VMLSLDAV.S16 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavq[_s32]( | a -> Qn | VMLSLDAV.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavq_p[_s16]( | a -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMLSLDAVT.S16 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavq_p[_s32]( | a -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMLSLDAVT.S32 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavaxq[_s16]( | a -> [RdaHi,RdaLo] | VMLSLDAVAX.S16 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | | | |
| int16x8_t b, | c -> Qm | | | |
| int16x8_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavaxq[_s32]( | a -> [RdaHi,RdaLo] | VMLSLDAVAX.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | | | |
| int32x4_t b, | c -> Qm | | | |
| int32x4_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavaxq_p[_s16]( | a -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | VPST | | |
| int16x8_t b, | c -> Qm | VMLSLDAVAXT.S16 RdaLo, RdaHi, Qn, Qm | | |
| int16x8_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavaxq_p[_s32]( | a -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | VPST | | |
| int32x4_t b, | c -> Qm | VMLSLDAVAXT.S32 RdaLo, RdaHi, Qn, Qm | | |
| int32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavxq[_s16]( | a -> Qn | VMLSLDAVX.S16 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavxq[_s32]( | a -> Qn | VMLSLDAVX.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavxq_p[_s16]( | a -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMLSLDAVXT.S16 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vmlsldavxq_p[_s32]( | a -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMLSLDAVXT.S32 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlaldavhaq[_s32]( | a -> [RdaHi,RdaLo] | VRMLALDAVHA.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | | | |
| int32x4_t b, | c -> Qm | | | |
| int32x4_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vrmlaldavhaq[_u32]( | a -> [RdaHi,RdaLo] | VRMLALDAVHA.U32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| uint64_t a, | b -> Qn | | | |
| uint32x4_t b, | c -> Qm | | | |
| uint32x4_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlaldavhaq_p[_s32]( | a -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | VPST | | |
| int32x4_t b, | c -> Qm | VRMLALDAVHAT.S32 RdaLo, RdaHi, Qn, Qm | | |
| int32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vrmlaldavhaq_p[_u32]( | a -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| uint64_t a, | b -> Qn | VPST | | |
| uint32x4_t b, | c -> Qm | VRMLALDAVHAT.U32 RdaLo, RdaHi, Qn, Qm | | |
| uint32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlaldavhq[_s32]( | a -> Qn | VRMLALDAVH.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vrmlaldavhq[_u32]( | a -> Qn | VRMLALDAVH.U32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlaldavhq_p[_s32]( | a -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VRMLALDAVHT.S32 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]vrmlaldavhq_p[_u32]( | a -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VRMLALDAVHT.U32 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlaldavhaxq[_s32]( | a -> [RdaHi,RdaLo] | VRMLALDAVHAX.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | | | |
| int32x4_t b, | c -> Qm | | | |
| int32x4_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlaldavhaxq_p[_s32]( | a -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | VPST | | |
| int32x4_t b, | c -> Qm | VRMLALDAVHAXT.S32 RdaLo, RdaHi, Qn, Qm | | |
| int32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlaldavhxq[_s32]( | a -> Qn | VRMLALDAVHX.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlaldavhxq_p[_s32]( | a -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VRMLALDAVHXT.S32 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlsldavhaq[_s32]( | a -> [RdaHi,RdaLo] | VRMLSLDAVHA.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | | | |
| int32x4_t b, | c -> Qm | | | |
| int32x4_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlsldavhaq_p[_s32]( | a -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | VPST | | |
| int32x4_t b, | c -> Qm | VRMLSLDAVHAT.S32 RdaLo, RdaHi, Qn, Qm | | |
| int32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlsldavhq[_s32]( | a -> Qn | VRMLSLDAVH.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlsldavhq_p[_s32]( | a -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VRMLSLDAVHT.S32 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlsldavhaxq[_s32]( | a -> [RdaHi,RdaLo] | VRMLSLDAVHAX.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | | | |
| int32x4_t b, | c -> Qm | | | |
| int32x4_t c) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlsldavhaxq_p[_s32]( | a -> [RdaHi,RdaLo] | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int64_t a, | b -> Qn | VPST | | |
| int32x4_t b, | c -> Qm | VRMLSLDAVHAXT.S32 RdaLo, RdaHi, Qn, Qm | | |
| int32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlsldavhxq[_s32]( | a -> Qn | VRMLSLDAVHX.S32 RdaLo, RdaHi, Qn, Qm | [RdaHi,RdaLo] -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]vrmlsldavhxq_p[_s32]( | a -> Qn | VMSR P0, Rp | [RdaHi,RdaLo] -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VRMLSLDAVHXT.S32 RdaLo, RdaHi, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+---------------------------+--------------------------------------------+-----------------------------+---------------------------+

Fused multiply-accumulate
-------------------------

+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+===========================================+========================+============================+===================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vfmaq[_n_f16]( | add -> Qda | VFMA.F16 Qda, Qn, Rm | Qda -> result | |
| float16x8_t add, | m1 -> Qn | | | |
| float16x8_t m1, | m2 -> Rm | | | |
| float16_t m2) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vfmaq[_n_f32]( | add -> Qda | VFMA.F32 Qda, Qn, Rm | Qda -> result | |
| float32x4_t add, | m1 -> Qn | | | |
| float32x4_t m1, | m2 -> Rm | | | |
| float32_t m2) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vfmaq_m[_n_f16]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| float16x8_t add, | m1 -> Qn | VPST | | |
| float16x8_t m1, | m2 -> Rm | VFMAT.F16 Qda, Qn, Rm | | |
| float16_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vfmaq_m[_n_f32]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| float32x4_t add, | m1 -> Qn | VPST | | |
| float32x4_t m1, | m2 -> Rm | VFMAT.F32 Qda, Qn, Rm | | |
| float32_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vfmaq[_f16]( | add -> Qda | VFMA.F16 Qda, Qn, Qm | Qda -> result | |
| float16x8_t add, | m1 -> Qn | | | |
| float16x8_t m1, | m2 -> Qm | | | |
| float16x8_t m2) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vfmaq[_f32]( | add -> Qda | VFMA.F32 Qda, Qn, Qm | Qda -> result | |
| float32x4_t add, | m1 -> Qn | | | |
| float32x4_t m1, | m2 -> Qm | | | |
| float32x4_t m2) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vfmaq_m[_f16]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| float16x8_t add, | m1 -> Qn | VPST | | |
| float16x8_t m1, | m2 -> Qm | VFMAT.F16 Qda, Qn, Qm | | |
| float16x8_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vfmaq_m[_f32]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| float32x4_t add, | m1 -> Qn | VPST | | |
| float32x4_t m1, | m2 -> Qm | VFMAT.F32 Qda, Qn, Qm | | |
| float32x4_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vfmasq[_n_f16]( | m1 -> Qda | VFMAS.F16 Qda, Qn, Rm | Qda -> result | |
| float16x8_t m1, | m2 -> Qn | | | |
| float16x8_t m2, | add -> Rm | | | |
| float16_t add) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vfmasq[_n_f32]( | m1 -> Qda | VFMAS.F32 Qda, Qn, Rm | Qda -> result | |
| float32x4_t m1, | m2 -> Qn | | | |
| float32x4_t m2, | add -> Rm | | | |
| float32_t add) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vfmasq_m[_n_f16]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| float16x8_t m1, | m2 -> Qn | VPST | | |
| float16x8_t m2, | add -> Rm | VFMAST.F16 Qda, Qn, Rm | | |
| float16_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vfmasq_m[_n_f32]( | m1 -> Qda | VMSR P0, Rp | Qda -> result | |
| float32x4_t m1, | m2 -> Qn | VPST | | |
| float32x4_t m2, | add -> Rm | VFMAST.F32 Qda, Qn, Rm | | |
| float32_t add, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vfmsq[_f16]( | add -> Qda | VFMS.F16 Qda, Qn, Qm | Qda -> result | |
| float16x8_t add, | m1 -> Qn | | | |
| float16x8_t m1, | m2 -> Qm | | | |
| float16x8_t m2) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vfmsq[_f32]( | add -> Qda | VFMS.F32 Qda, Qn, Qm | Qda -> result | |
| float32x4_t add, | m1 -> Qn | | | |
| float32x4_t m1, | m2 -> Qm | | | |
| float32x4_t m2) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vfmsq_m[_f16]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| float16x8_t add, | m1 -> Qn | VPST | | |
| float16x8_t m1, | m2 -> Qm | VFMST.F16 Qda, Qn, Qm | | |
| float16x8_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vfmsq_m[_f32]( | add -> Qda | VMSR P0, Rp | Qda -> result | |
| float32x4_t add, | m1 -> Qn | VPST | | |
| float32x4_t m1, | m2 -> Qm | VFMST.F32 Qda, Qn, Qm | | |
| float32x4_t m2, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+----------------------------+-------------------+---------------------------+

Subtract
~~~~~~~~

Subtraction
-----------

+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==========================================+========================+============================+======================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vsbciq[_s32]( | a -> Qn | VSBCI.I32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | VMRS Rt, FPSCR_nzcvqc | Rt -> *carry_out | |
| int32x4_t b, | | LSR Rt, #29 | | |
| unsigned *carry_out) | | AND Rt, #1 | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vsbciq[_u32]( | a -> Qn | VSBCI.I32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | VMRS Rt, FPSCR_nzcvqc | Rt -> *carry_out | |
| uint32x4_t b, | | LSR Rt, #29 | | |
| unsigned *carry_out) | | AND Rt, #1 | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vsbciq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | Rt -> *carry_out | |
| int32x4_t a, | b -> Qm | VSBCIT.I32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | VMRS Rt, FPSCR_nzcvqc | | |
| unsigned *carry_out, | | LSR Rt, #29 | | |
| mve_pred16_t p) | | AND Rt, #1 | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vsbciq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | Rt -> *carry_out | |
| uint32x4_t a, | b -> Qm | VSBCIT.I32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | VMRS Rt, FPSCR_nzcvqc | | |
| unsigned *carry_out, | | LSR Rt, #29 | | |
| mve_pred16_t p) | | AND Rt, #1 | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vsbcq[_s32]( | a -> Qn | VMRS Rs, FPSCR_nzcvqc | Qd -> result | |
| int32x4_t a, | b -> Qm | BFI Rs, Rt, #29, #1 | Rt -> *carry | |
| int32x4_t b, | *carry -> Rt | VMSR FPSCR_nzcvqc, Rs | | |
| unsigned *carry) | | VSBC.I32 Qd, Qn, Qm | | |
| | | VMRS Rt, FPSCR_nzcvqc | | |
| | | LSR Rt, #29 | | |
| | | AND Rt, #1 | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vsbcq[_u32]( | a -> Qn | VMRS Rs, FPSCR_nzcvqc | Qd -> result | |
| uint32x4_t a, | b -> Qm | BFI Rs, Rt, #29, #1 | Rt -> *carry | |
| uint32x4_t b, | *carry -> Rt | VMSR FPSCR_nzcvqc, Rs | | |
| unsigned *carry) | | VSBC.I32 Qd, Qn, Qm | | |
| | | VMRS Rt, FPSCR_nzcvqc | | |
| | | LSR Rt, #29 | | |
| | | AND Rt, #1 | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vsbcq_m[_s32]( | inactive -> Qd | VMRS Rs, FPSCR_nzcvqc | Qd -> result | |
| int32x4_t inactive, | a -> Qn | BFI Rs, Rt, #29, #1 | Rt -> *carry | |
| int32x4_t a, | b -> Qm | VMSR FPSCR_nzcvqc, Rs | | |
| int32x4_t b, | *carry -> Rt | VMSR P0, Rp | | |
| unsigned *carry, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VSBCT.I32 Qd, Qn, Qm | | |
| | | VMRS Rt, FPSCR_nzcvqc | | |
| | | LSR Rt, #29 | | |
| | | AND Rt, #1 | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vsbcq_m[_u32]( | inactive -> Qd | VMRS Rs, FPSCR_nzcvqc | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | BFI Rs, Rt, #29, #1 | Rt -> *carry | |
| uint32x4_t a, | b -> Qm | VMSR FPSCR_nzcvqc, Rs | | |
| uint32x4_t b, | *carry -> Rt | VMSR P0, Rp | | |
| unsigned *carry, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VSBCT.I32 Qd, Qn, Qm | | |
| | | VMRS Rt, FPSCR_nzcvqc | | |
| | | LSR Rt, #29 | | |
| | | AND Rt, #1 | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vsubq[_s8]( | a -> Qn | VSUB.I8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vsubq[_s16]( | a -> Qn | VSUB.I16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vsubq[_s32]( | a -> Qn | VSUB.I32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vsubq[_n_s8]( | a -> Qn | VSUB.I8 Qd, Qn, Rm | Qd -> result | |
| int8x16_t a, | b -> Rm | | | |
| int8_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vsubq[_n_s16]( | a -> Qn | VSUB.I16 Qd, Qn, Rm | Qd -> result | |
| int16x8_t a, | b -> Rm | | | |
| int16_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vsubq[_n_s32]( | a -> Qn | VSUB.I32 Qd, Qn, Rm | Qd -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vsubq[_u8]( | a -> Qn | VSUB.I8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vsubq[_u16]( | a -> Qn | VSUB.I16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vsubq[_u32]( | a -> Qn | VSUB.I32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vsubq[_n_u8]( | a -> Qn | VSUB.I8 Qd, Qn, Rm | Qd -> result | |
| uint8x16_t a, | b -> Rm | | | |
| uint8_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vsubq[_n_u16]( | a -> Qn | VSUB.I16 Qd, Qn, Rm | Qd -> result | |
| uint16x8_t a, | b -> Rm | | | |
| uint16_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vsubq[_n_u32]( | a -> Qn | VSUB.I32 Qd, Qn, Rm | Qd -> result | |
| uint32x4_t a, | b -> Rm | | | |
| uint32_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vsubq[_f16]( | a -> Qn | VSUB.F16 Qd, Qn, Qm | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vsubq[_f32]( | a -> Qn | VSUB.F32 Qd, Qn, Qm | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vsubq[_n_f16]( | a -> Qn | VSUB.F16 Qd, Qn, Rm | Qd -> result | |
| float16x8_t a, | b -> Rm | | | |
| float16_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vsubq[_n_f32]( | a -> Qn | VSUB.F32 Qd, Qn, Rm | Qd -> result | |
| float32x4_t a, | b -> Rm | | | |
| float32_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vsubq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VSUBT.I8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vsubq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VSUBT.I16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vsubq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VSUBT.I32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vsubq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Rm | VSUBT.I8 Qd, Qn, Rm | | |
| int8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vsubq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Rm | VSUBT.I16 Qd, Qn, Rm | | |
| int16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vsubq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Rm | VSUBT.I32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vsubq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VSUBT.I8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vsubq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VSUBT.I16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vsubq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VSUBT.I32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vsubq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Rm | VSUBT.I8 Qd, Qn, Rm | | |
| uint8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vsubq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Rm | VSUBT.I16 Qd, Qn, Rm | | |
| uint16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vsubq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Rm | VSUBT.I32 Qd, Qn, Rm | | |
| uint32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vsubq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VSUBT.F16 Qd, Qn, Qm | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vsubq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VSUBT.F32 Qd, Qn, Qm | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vsubq_m[_n_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Rm | VSUBT.F16 Qd, Qn, Rm | | |
| float16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vsubq_m[_n_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Rm | VSUBT.F32 Qd, Qn, Rm | | |
| float32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vsubq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VSUBT.I8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vsubq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VSUBT.I16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vsubq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VSUBT.I32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vsubq_x[_n_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int8_t b, | p -> Rp | VSUBT.I8 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vsubq_x[_n_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int16_t b, | p -> Rp | VSUBT.I16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vsubq_x[_n_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VSUBT.I32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vsubq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VSUBT.I8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vsubq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VSUBT.I16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vsubq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VSUBT.I32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vsubq_x[_n_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| uint8_t b, | p -> Rp | VSUBT.I8 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vsubq_x[_n_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| uint16_t b, | p -> Rp | VSUBT.I16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vsubq_x[_n_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| uint32_t b, | p -> Rp | VSUBT.I32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vsubq_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VSUBT.F16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vsubq_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VSUBT.F32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vsubq_x[_n_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Rm | VPST | | |
| float16_t b, | p -> Rp | VSUBT.F16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vsubq_x[_n_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Rm | VPST | | |
| float32_t b, | p -> Rp | VSUBT.F32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhsubq[_n_s8]( | a -> Qn | VHSUB.S8 Qd, Qn, Rm | Qd -> result | |
| int8x16_t a, | b -> Rm | | | |
| int8_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhsubq[_n_s16]( | a -> Qn | VHSUB.S16 Qd, Qn, Rm | Qd -> result | |
| int16x8_t a, | b -> Rm | | | |
| int16_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhsubq[_n_s32]( | a -> Qn | VHSUB.S32 Qd, Qn, Rm | Qd -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vhsubq[_n_u8]( | a -> Qn | VHSUB.U8 Qd, Qn, Rm | Qd -> result | |
| uint8x16_t a, | b -> Rm | | | |
| uint8_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vhsubq[_n_u16]( | a -> Qn | VHSUB.U16 Qd, Qn, Rm | Qd -> result | |
| uint16x8_t a, | b -> Rm | | | |
| uint16_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vhsubq[_n_u32]( | a -> Qn | VHSUB.U32 Qd, Qn, Rm | Qd -> result | |
| uint32x4_t a, | b -> Rm | | | |
| uint32_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vhsubq[_s8]( | a -> Qn | VHSUB.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vhsubq[_s16]( | a -> Qn | VHSUB.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vhsubq[_s32]( | a -> Qn | VHSUB.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vhsubq[_u8]( | a -> Qn | VHSUB.U8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vhsubq[_u16]( | a -> Qn | VHSUB.U16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vhsubq[_u32]( | a -> Qn | VHSUB.U32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhsubq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Rm | VHSUBT.S8 Qd, Qn, Rm | | |
| int8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhsubq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Rm | VHSUBT.S16 Qd, Qn, Rm | | |
| int16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhsubq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Rm | VHSUBT.S32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vhsubq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Rm | VHSUBT.U8 Qd, Qn, Rm | | |
| uint8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vhsubq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Rm | VHSUBT.U16 Qd, Qn, Rm | | |
| uint16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vhsubq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Rm | VHSUBT.U32 Qd, Qn, Rm | | |
| uint32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhsubq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VHSUBT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhsubq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VHSUBT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhsubq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VHSUBT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vhsubq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VHSUBT.U8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vhsubq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VHSUBT.U16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vhsubq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VHSUBT.U32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhsubq_x[_n_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int8_t b, | p -> Rp | VHSUBT.S8 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhsubq_x[_n_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int16_t b, | p -> Rp | VHSUBT.S16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhsubq_x[_n_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VHSUBT.S32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vhsubq_x[_n_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| uint8_t b, | p -> Rp | VHSUBT.U8 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vhsubq_x[_n_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| uint16_t b, | p -> Rp | VHSUBT.U16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vhsubq_x[_n_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| uint32_t b, | p -> Rp | VHSUBT.U32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhsubq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VHSUBT.S8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhsubq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VHSUBT.S16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhsubq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VHSUBT.S32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vhsubq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VHSUBT.U8 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vhsubq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VHSUBT.U16 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vhsubq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VHSUBT.U32 Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+----------------------------+----------------------+---------------------------+
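The halving (``VHSUB``) rows above subtract without intermediate overflow and then halve the result, while the ``_m``/``_x`` predicated forms select per lane between the computed value and either the ``inactive`` argument (merging) or an unspecified value. The following is a rough scalar model of those semantics for the signed 8-bit case — not the intrinsics themselves, which require ``arm_mve.h`` and an MVE target, and the helper names here are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of VHSUB.S8: subtract in a wider type so the
   intermediate cannot overflow, then arithmetic-shift right by one
   (rounding toward negative infinity). */
static int8_t hsub_s8(int8_t a, int8_t b)
{
    return (int8_t)(((int16_t)a - (int16_t)b) >> 1);
}

/* Scalar model of the _m (merging) predication pattern for 8-bit
   lanes: lanes whose predicate bit is clear take the corresponding
   lane of `inactive` instead of the computed difference. */
static void vsubq_m_s8_model(int8_t r[16], const int8_t inactive[16],
                             const int8_t a[16], const int8_t b[16],
                             uint16_t p)
{
    for (int i = 0; i < 16; i++)
        r[i] = (p & (1u << i)) ? (int8_t)(a[i] - b[i]) : inactive[i];
}
```

Note that ``mve_pred16_t`` carries one bit per byte of the vector, so for 16- and 32-bit lanes each lane consumes two or four predicate bits; the single-bit-per-lane loop above applies only to 8-bit elements.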
Saturating subtract
-------------------
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==========================================+========================+===========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqsubq[_n_s8]( | a -> Qn | VQSUB.S8 Qd, Qn, Rm | Qd -> result | |
| int8x16_t a, | b -> Rm | | | |
| int8_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqsubq[_n_s16]( | a -> Qn | VQSUB.S16 Qd, Qn, Rm | Qd -> result | |
| int16x8_t a, | b -> Rm | | | |
| int16_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqsubq[_n_s32]( | a -> Qn | VQSUB.S32 Qd, Qn, Rm | Qd -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqsubq[_n_u8]( | a -> Qn | VQSUB.U8 Qd, Qn, Rm | Qd -> result | |
| uint8x16_t a, | b -> Rm | | | |
| uint8_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqsubq[_n_u16]( | a -> Qn | VQSUB.U16 Qd, Qn, Rm | Qd -> result | |
| uint16x8_t a, | b -> Rm | | | |
| uint16_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqsubq[_n_u32]( | a -> Qn | VQSUB.U32 Qd, Qn, Rm | Qd -> result | |
| uint32x4_t a, | b -> Rm | | | |
| uint32_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqsubq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Rm | VQSUBT.S8 Qd, Qn, Rm | | |
| int8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqsubq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Rm | VQSUBT.S16 Qd, Qn, Rm | | |
| int16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqsubq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Rm | VQSUBT.S32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqsubq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Rm | VQSUBT.U8 Qd, Qn, Rm | | |
| uint8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqsubq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Rm | VQSUBT.U16 Qd, Qn, Rm | | |
| uint16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqsubq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Rm | VQSUBT.U32 Qd, Qn, Rm | | |
| uint32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vqsubq[_s8]( | a -> Qn | VQSUB.S8 Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vqsubq[_s16]( | a -> Qn | VQSUB.S16 Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vqsubq[_s32]( | a -> Qn | VQSUB.S32 Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vqsubq[_u8]( | a -> Qn | VQSUB.U8 Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vqsubq[_u16]( | a -> Qn | VQSUB.U16 Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vqsubq[_u32]( | a -> Qn | VQSUB.U32 Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqsubq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VQSUBT.S8 Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqsubq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VQSUBT.S16 Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqsubq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VQSUBT.S32 Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqsubq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VQSUBT.U8 Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqsubq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VQSUBT.U16 Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqsubq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VQSUBT.U32 Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+------------------+---------------------------+
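Unlike plain ``VSUB``, the ``VQSUB`` forms above clamp the result to the element type's range instead of wrapping on overflow. A scalar sketch of that saturation behavior for 8-bit lanes — hypothetical helper names, not the ``arm_mve.h`` intrinsics:

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of VQSUB.S8: subtract in a wider type, then saturate
   the result to [INT8_MIN, INT8_MAX] rather than wrapping. */
static int8_t qsub_s8(int8_t a, int8_t b)
{
    int16_t r = (int16_t)a - (int16_t)b;
    if (r > INT8_MAX) return INT8_MAX;
    if (r < INT8_MIN) return INT8_MIN;
    return (int8_t)r;
}

/* Scalar model of VQSUB.U8: an unsigned difference that would go
   negative clamps at zero. */
static uint8_t qsub_u8(uint8_t a, uint8_t b)
{
    return (a > b) ? (uint8_t)(a - b) : 0;
}
```

The ``_m`` predicated variants combine this saturation with the same per-lane merging from ``inactive`` as the other predicated subtract intrinsics.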
Rounding
--------
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=====================================================+========================+========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndq[_f16](float16x8_t a) | a -> Qm | VRINTZ.F16 Qd, Qm | Qd -> result | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vrndq[_f32](float32x4_t a) | a -> Qm | VRINTZ.F32 Qd, Qm | Qd -> result | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VRINTZT.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrndq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VRINTZT.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndq_x[_f16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VRINTZT.F16 Qd, Qm | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrndq_x[_f32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VRINTZT.F32 Qd, Qm | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndnq[_f16](float16x8_t a) | a -> Qm | VRINTN.F16 Qd, Qm | Qd -> result | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vrndnq[_f32](float32x4_t a) | a -> Qm | VRINTN.F32 Qd, Qm | Qd -> result | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndnq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VRINTNT.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrndnq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VRINTNT.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndnq_x[_f16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VRINTNT.F16 Qd, Qm | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrndnq_x[_f32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VRINTNT.F32 Qd, Qm | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndmq[_f16](float16x8_t a) | a -> Qm | VRINTM.F16 Qd, Qm | Qd -> result | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vrndmq[_f32](float32x4_t a) | a -> Qm | VRINTM.F32 Qd, Qm | Qd -> result | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndmq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VRINTMT.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrndmq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VRINTMT.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndmq_x[_f16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VRINTMT.F16 Qd, Qm | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrndmq_x[_f32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VRINTMT.F32 Qd, Qm | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndpq[_f16](float16x8_t a) | a -> Qm | VRINTP.F16 Qd, Qm | Qd -> result | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vrndpq[_f32](float32x4_t a) | a -> Qm | VRINTP.F32 Qd, Qm | Qd -> result | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndpq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VRINTPT.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrndpq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VRINTPT.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndpq_x[_f16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VRINTPT.F16 Qd, Qm | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrndpq_x[_f32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VRINTPT.F32 Qd, Qm | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndaq[_f16](float16x8_t a) | a -> Qm | VRINTA.F16 Qd, Qm | Qd -> result | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vrndaq[_f32](float32x4_t a) | a -> Qm | VRINTA.F32 Qd, Qm | Qd -> result | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndaq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VRINTAT.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrndaq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VRINTAT.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndaq_x[_f16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VRINTAT.F16 Qd, Qm | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrndaq_x[_f32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VRINTAT.F32 Qd, Qm | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndxq[_f16](float16x8_t a) | a -> Qm | VRINTX.F16 Qd, Qm | Qd -> result | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vrndxq[_f32](float32x4_t a) | a -> Qm | VRINTX.F32 Qd, Qm | Qd -> result | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndxq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VRINTXT.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrndxq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VRINTXT.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vrndxq_x[_f16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VRINTXT.F16 Qd, Qm | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vrndxq_x[_f32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VRINTXT.F32 Qd, Qm | | |
+-----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+

Bit manipulation
================

Count leading sign bits
~~~~~~~~~~~~~~~~~~~~~~~

+------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+================================================+========================+======================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vclsq[_s8](int8x16_t a) | a -> Qm | VCLS.S8 Qd, Qm | Qd -> result | |
+------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vclsq[_s16](int16x8_t a) | a -> Qm | VCLS.S16 Qd, Qm | Qd -> result | |
+------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vclsq[_s32](int32x4_t a) | a -> Qm | VCLS.S32 Qd, Qm | Qd -> result | |
+------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vclsq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | p -> Rp | VCLST.S8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vclsq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | p -> Rp | VCLST.S16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vclsq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | p -> Rp | VCLST.S32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vclsq_x[_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCLST.S8 Qd, Qm | | |
+------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vclsq_x[_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCLST.S16 Qd, Qm | | |
+------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vclsq_x[_s32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCLST.S32 Qd, Qm | | |
+------------------------------------------------+------------------------+----------------------+------------------+---------------------------+

Count leading zeros
~~~~~~~~~~~~~~~~~~~

+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==================================================+========================+======================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vclzq[_s8](int8x16_t a) | a -> Qm | VCLZ.I8 Qd, Qm | Qd -> result | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vclzq[_s16](int16x8_t a) | a -> Qm | VCLZ.I16 Qd, Qm | Qd -> result | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vclzq[_s32](int32x4_t a) | a -> Qm | VCLZ.I32 Qd, Qm | Qd -> result | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vclzq[_u8](uint8x16_t a) | a -> Qm | VCLZ.I8 Qd, Qm | Qd -> result | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vclzq[_u16](uint16x8_t a) | a -> Qm | VCLZ.I16 Qd, Qm | Qd -> result | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vclzq[_u32](uint32x4_t a) | a -> Qm | VCLZ.I32 Qd, Qm | Qd -> result | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vclzq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | p -> Rp | VCLZT.I8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vclzq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | p -> Rp | VCLZT.I16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vclzq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | p -> Rp | VCLZT.I32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vclzq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | p -> Rp | VCLZT.I8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vclzq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | p -> Rp | VCLZT.I16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vclzq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint32x4_t a, | p -> Rp | VCLZT.I32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vclzq_x[_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCLZT.I8 Qd, Qm | | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vclzq_x[_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCLZT.I16 Qd, Qm | | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vclzq_x[_s32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCLZT.I32 Qd, Qm | | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vclzq_x[_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCLZT.I8 Qd, Qm | | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vclzq_x[_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCLZT.I16 Qd, Qm | | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vclzq_x[_u32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCLZT.I32 Qd, Qm | | |
+--------------------------------------------------+------------------------+----------------------+------------------+---------------------------+

Bitwise clear
~~~~~~~~~~~~~

+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=========================================+==============================+=========================+===================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vbicq[_s8]( | a -> Qn | VBIC Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vbicq[_s16]( | a -> Qn | VBIC Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vbicq[_s32]( | a -> Qn | VBIC Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vbicq[_u8]( | a -> Qn | VBIC Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vbicq[_u16]( | a -> Qn | VBIC Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vbicq[_u32]( | a -> Qn | VBIC Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vbicq[_f16]( | a -> Qn | VBIC Qd, Qn, Qm | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vbicq[_f32]( | a -> Qn | VBIC Qd, Qn, Qm | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vbicq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VBICT Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vbicq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VBICT Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vbicq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VBICT Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vbicq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VBICT Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vbicq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VBICT Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vbicq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VBICT Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vbicq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VBICT Qd, Qn, Qm | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vbicq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VBICT Qd, Qn, Qm | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vbicq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VBICT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vbicq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VBICT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vbicq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VBICT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vbicq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VBICT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vbicq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VBICT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vbicq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VBICT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vbicq_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VBICT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vbicq_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VBICT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vbicq[_n_s16]( | a -> Qda | VBIC.I16 Qda, #imm | Qda -> result | |
| int16x8_t a, | imm in AdvSIMDExpandImm | | | |
| const int16_t imm) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vbicq[_n_s32]( | a -> Qda | VBIC.I32 Qda, #imm | Qda -> result | |
| int32x4_t a, | imm in AdvSIMDExpandImm | | | |
| const int32_t imm) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vbicq[_n_u16]( | a -> Qda | VBIC.I16 Qda, #imm | Qda -> result | |
| uint16x8_t a, | imm in AdvSIMDExpandImm | | | |
| const uint16_t imm) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vbicq[_n_u32]( | a -> Qda | VBIC.I32 Qda, #imm | Qda -> result | |
| uint32x4_t a, | imm in AdvSIMDExpandImm | | | |
| const uint32_t imm) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vbicq_m_n[_s16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int16x8_t a, | imm in AdvSIMDExpandImm | VPST | | |
| const int16_t imm, | p -> Rp | VBICT.I16 Qda, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vbicq_m_n[_s32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int32x4_t a, | imm in AdvSIMDExpandImm | VPST | | |
| const int32_t imm, | p -> Rp | VBICT.I32 Qda, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vbicq_m_n[_u16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint16x8_t a, | imm in AdvSIMDExpandImm | VPST | | |
| const uint16_t imm, | p -> Rp | VBICT.I16 Qda, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vbicq_m_n[_u32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint32x4_t a, | imm in AdvSIMDExpandImm | VPST | | |
| const uint32_t imm, | p -> Rp | VBICT.I32 Qda, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+

Logical
=======

Negate
~~~~~~

+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+====================================================+========================+=======================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vnegq[_f16](float16x8_t a) | a -> Qm | VNEG.F16 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vnegq[_f32](float32x4_t a) | a -> Qm | VNEG.F32 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vnegq[_s8](int8x16_t a) | a -> Qm | VNEG.S8 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vnegq[_s16](int16x8_t a) | a -> Qm | VNEG.S16 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vnegq[_s32](int32x4_t a) | a -> Qm | VNEG.S32 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vnegq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VNEGT.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vnegq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VNEGT.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vnegq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | p -> Rp | VNEGT.S8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vnegq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | p -> Rp | VNEGT.S16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vnegq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | p -> Rp | VNEGT.S32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vnegq_x[_f16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VNEGT.F16 Qd, Qm | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vnegq_x[_f32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VNEGT.F32 Qd, Qm | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vnegq_x[_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VNEGT.S8 Qd, Qm | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vnegq_x[_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VNEGT.S16 Qd, Qm | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vnegq_x[_s32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VNEGT.S32 Qd, Qm | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vqnegq[_s8](int8x16_t a) | a -> Qm | VQNEG.S8 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vqnegq[_s16](int16x8_t a) | a -> Qm | VQNEG.S16 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vqnegq[_s32](int32x4_t a) | a -> Qm | VQNEG.S32 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqnegq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | p -> Rp | VQNEGT.S8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqnegq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | p -> Rp | VQNEGT.S16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqnegq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | p -> Rp | VQNEGT.S32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+-----------------------+------------------+---------------------------+

AND
~~~

+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+========================================+========================+======================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vandq[_s8]( | a -> Qn | VAND Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vandq[_s16]( | a -> Qn | VAND Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vandq[_s32]( | a -> Qn | VAND Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vandq[_u8]( | a -> Qn | VAND Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vandq[_u16]( | a -> Qn | VAND Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vandq[_u32]( | a -> Qn | VAND Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vandq[_f16]( | a -> Qn | VAND Qd, Qn, Qm | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vandq[_f32]( | a -> Qn | VAND Qd, Qn, Qm | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vandq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VANDT Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vandq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VANDT Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vandq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VANDT Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vandq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VANDT Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vandq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VANDT Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vandq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VANDT Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vandq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VANDT Qd, Qn, Qm | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vandq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VANDT Qd, Qn, Qm | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vandq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VANDT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vandq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VANDT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vandq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VANDT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vandq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VANDT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vandq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VANDT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vandq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VANDT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vandq_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VANDT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vandq_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VANDT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+

Exclusive OR
~~~~~~~~~~~~

+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+========================================+========================+======================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]veorq[_s8]( | a -> Qn | VEOR Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]veorq[_s16]( | a -> Qn | VEOR Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]veorq[_s32]( | a -> Qn | VEOR Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]veorq[_u8]( | a -> Qn | VEOR Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]veorq[_u16]( | a -> Qn | VEOR Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]veorq[_u32]( | a -> Qn | VEOR Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]veorq[_f16]( | a -> Qn | VEOR Qd, Qn, Qm | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]veorq[_f32]( | a -> Qn | VEOR Qd, Qn, Qm | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]veorq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VEORT Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]veorq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VEORT Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]veorq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VEORT Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]veorq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VEORT Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]veorq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VEORT Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]veorq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VEORT Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]veorq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VEORT Qd, Qn, Qm | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]veorq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VEORT Qd, Qn, Qm | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]veorq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VEORT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]veorq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VEORT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]veorq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VEORT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]veorq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VEORT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]veorq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VEORT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]veorq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VEORT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]veorq_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VEORT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]veorq_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VEORT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
Bitwise NOT
~~~~~~~~~~~
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+========================================================+==============================+========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vmvnq[_s8](int8x16_t a) | a -> Qm | VMVN Qd, Qm | Qd -> result | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vmvnq[_s16](int16x8_t a) | a -> Qm | VMVN Qd, Qm | Qd -> result | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vmvnq[_s32](int32x4_t a) | a -> Qm | VMVN Qd, Qm | Qd -> result | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vmvnq[_u8](uint8x16_t a) | a -> Qm | VMVN Qd, Qm | Qd -> result | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vmvnq[_u16](uint16x8_t a) | a -> Qm | VMVN Qd, Qm | Qd -> result | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vmvnq[_u32](uint32x4_t a) | a -> Qm | VMVN Qd, Qm | Qd -> result | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmvnq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | p -> Rp | VMVNT Qd, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmvnq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | p -> Rp | VMVNT Qd, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmvnq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | p -> Rp | VMVNT Qd, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmvnq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | p -> Rp | VMVNT Qd, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmvnq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | p -> Rp | VMVNT Qd, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmvnq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint32x4_t a, | p -> Rp | VMVNT Qd, Qm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmvnq_x[_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMVNT Qd, Qm | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmvnq_x[_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMVNT Qd, Qm | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmvnq_x[_s32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMVNT Qd, Qm | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmvnq_x[_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMVNT Qd, Qm | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmvnq_x[_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMVNT Qd, Qm | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmvnq_x[_u32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMVNT Qd, Qm | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmvnq_n_s16(const int16_t imm) | imm in AdvSIMDExpandImm | VMVN.I16 Qd, #imm | Qd -> result | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmvnq_n_s32(const int32_t imm) | imm in AdvSIMDExpandImm | VMVN.I32 Qd, #imm | Qd -> result | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmvnq_n_u16(const uint16_t imm) | imm in AdvSIMDExpandImm | VMVN.I16 Qd, #imm | Qd -> result | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmvnq_n_u32(const uint32_t imm) | imm in AdvSIMDExpandImm | VMVN.I32 Qd, #imm | Qd -> result | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmvnq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | imm in AdvSIMDExpandImm | VPST | | |
| const int16_t imm, | p -> Rp | VMVNT.I16 Qd, #imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmvnq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | imm in AdvSIMDExpandImm | VPST | | |
| const int32_t imm, | p -> Rp | VMVNT.I32 Qd, #imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmvnq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | imm in AdvSIMDExpandImm | VPST | | |
| const uint16_t imm, | p -> Rp | VMVNT.I16 Qd, #imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmvnq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | imm in AdvSIMDExpandImm | VPST | | |
| const uint32_t imm, | p -> Rp | VMVNT.I32 Qd, #imm | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmvnq_x_n_s16( | imm in AdvSIMDExpandImm | VMSR P0, Rp | Qd -> result | |
| const int16_t imm, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMVNT.I16 Qd, #imm | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmvnq_x_n_s32( | imm in AdvSIMDExpandImm | VMSR P0, Rp | Qd -> result | |
| const int32_t imm, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMVNT.I32 Qd, #imm | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmvnq_x_n_u16( | imm in AdvSIMDExpandImm | VMSR P0, Rp | Qd -> result | |
| const uint16_t imm, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMVNT.I16 Qd, #imm | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmvnq_x_n_u32( | imm in AdvSIMDExpandImm | VMSR P0, Rp | Qd -> result | |
| const uint32_t imm, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMVNT.I32 Qd, #imm | | |
+--------------------------------------------------------+------------------------------+------------------------+------------------+---------------------------+
OR-NOT
~~~~~~
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+========================================+========================+======================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vornq[_f16]( | a -> Qn | VORN Qd, Qn, Qm | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vornq[_f32]( | a -> Qn | VORN Qd, Qn, Qm | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vornq[_s8]( | a -> Qn | VORN Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vornq[_s16]( | a -> Qn | VORN Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vornq[_s32]( | a -> Qn | VORN Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vornq[_u8]( | a -> Qn | VORN Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vornq[_u16]( | a -> Qn | VORN Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vornq[_u32]( | a -> Qn | VORN Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vornq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VORNT Qd, Qn, Qm | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vornq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VORNT Qd, Qn, Qm | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vornq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VORNT Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vornq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VORNT Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vornq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VORNT Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vornq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VORNT Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vornq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VORNT Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vornq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VORNT Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vornq_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VORNT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vornq_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VORNT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vornq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VORNT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vornq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VORNT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vornq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VORNT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vornq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VORNT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vornq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VORNT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vornq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VORNT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+----------------------+------------------+---------------------------+
OR
~~
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=========================================+==============================+=========================+===================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vorrq[_f16]( | a -> Qn | VORR Qd, Qn, Qm | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vorrq[_f32]( | a -> Qn | VORR Qd, Qn, Qm | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vorrq[_s8]( | a -> Qn | VORR Qd, Qn, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vorrq[_s16]( | a -> Qn | VORR Qd, Qn, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vorrq[_s32]( | a -> Qn | VORR Qd, Qn, Qm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vorrq[_u8]( | a -> Qn | VORR Qd, Qn, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vorrq[_u16]( | a -> Qn | VORR Qd, Qn, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vorrq[_u32]( | a -> Qn | VORR Qd, Qn, Qm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vorrq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VORRT Qd, Qn, Qm | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vorrq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VORRT Qd, Qn, Qm | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vorrq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VORRT Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vorrq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VORRT Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vorrq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VORRT Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vorrq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VORRT Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vorrq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VORRT Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vorrq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VORRT Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vorrq_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VORRT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vorrq_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VORRT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vorrq_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VORRT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vorrq_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VORRT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vorrq_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VORRT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vorrq_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VORRT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vorrq_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VORRT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vorrq_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VORRT Qd, Qn, Qm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vorrq[_n_s16]( | a -> Qda | VORR.I16 Qda, #imm | Qda -> result | |
| int16x8_t a, | imm in AdvSIMDExpandImm | | | |
| const int16_t imm) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vorrq[_n_s32]( | a -> Qda | VORR.I32 Qda, #imm | Qda -> result | |
| int32x4_t a, | imm in AdvSIMDExpandImm | | | |
| const int32_t imm) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vorrq[_n_u16]( | a -> Qda | VORR.I16 Qda, #imm | Qda -> result | |
| uint16x8_t a, | imm in AdvSIMDExpandImm | | | |
| const uint16_t imm) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vorrq[_n_u32]( | a -> Qda | VORR.I32 Qda, #imm | Qda -> result | |
| uint32x4_t a, | imm in AdvSIMDExpandImm | | | |
| const uint32_t imm) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vorrq_m_n[_s16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int16x8_t a, | imm in AdvSIMDExpandImm | VPST | | |
| const int16_t imm, | p -> Rp | VORRT.I16 Qda, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vorrq_m_n[_s32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int32x4_t a, | imm in AdvSIMDExpandImm | VPST | | |
| const int32_t imm, | p -> Rp | VORRT.I32 Qda, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vorrq_m_n[_u16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint16x8_t a, | imm in AdvSIMDExpandImm | VPST | | |
| const uint16_t imm, | p -> Rp | VORRT.I16 Qda, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vorrq_m_n[_u32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint32x4_t a, | imm in AdvSIMDExpandImm | VPST | | |
| const uint32_t imm, | p -> Rp | VORRT.I32 Qda, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------------+-------------------------+-------------------+---------------------------+
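
The predicated forms in the table above can be illustrated with a scalar model. This is an informal sketch, not part of the specification: ``mve_pred16_t`` carries one predicate bit per byte lane, so for the ``_x`` (don't-care) variant of ``vorrq`` on ``uint8x16_t``, active lanes receive the bitwise OR while inactive lanes end up with architecturally unspecified contents (the hypothetical model below writes zeros purely to make that visible).

.. code:: c

   #include <stdint.h>

   /* Informal scalar model of vorrq_x[_u8] (not the intrinsic itself).
    * mve_pred16_t has one bit per byte lane; for the _x variant, lanes
    * whose predicate bit is clear are architecturally unspecified,
    * which this sketch represents as zero. */
   typedef struct { uint8_t lane[16]; } u8x16_model;

   static u8x16_model vorrq_x_u8_model(u8x16_model a, u8x16_model b,
                                       uint16_t p)
   {
       u8x16_model r;
       for (int i = 0; i < 16; i++)
           r.lane[i] = ((p >> i) & 1) ? (uint8_t)(a.lane[i] | b.lane[i])
                                      : 0; /* unspecified in hardware */
       return r;
   }

The merging variants differ only in how inactive lanes are filled: the ``_m_n`` forms shown above keep the corresponding lanes of the merge source ``Qda`` instead of leaving them unspecified.
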

Complex arithmetic
==================

Complex addition
~~~~~~~~~~~~~~~~

+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+================================================+========================+==================================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vcaddq_rot90[_f16]( | a -> Qn | VCADD.F16 Qd, Qn, Qm, #90 | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vcaddq_rot90[_f32]( | a -> Qn | VCADD.F32 Qd, Qn, Qm, #90 | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vcaddq_rot90[_s8]( | a -> Qn | VCADD.I8 Qd, Qn, Qm, #90 | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcaddq_rot90[_s16]( | a -> Qn | VCADD.I16 Qd, Qn, Qm, #90 | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcaddq_rot90[_s32]( | a -> Qn | VCADD.I32 Qd, Qn, Qm, #90 | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vcaddq_rot90[_u8]( | a -> Qn | VCADD.I8 Qd, Qn, Qm, #90 | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcaddq_rot90[_u16]( | a -> Qn | VCADD.I16 Qd, Qn, Qm, #90 | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcaddq_rot90[_u32]( | a -> Qn | VCADD.I32 Qd, Qn, Qm, #90 | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vcaddq_rot270[_f16]( | a -> Qn | VCADD.F16 Qd, Qn, Qm, #270 | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vcaddq_rot270[_f32]( | a -> Qn | VCADD.F32 Qd, Qn, Qm, #270 | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vcaddq_rot270[_s8]( | a -> Qn | VCADD.I8 Qd, Qn, Qm, #270 | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcaddq_rot270[_s16]( | a -> Qn | VCADD.I16 Qd, Qn, Qm, #270 | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcaddq_rot270[_s32]( | a -> Qn | VCADD.I32 Qd, Qn, Qm, #270 | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vcaddq_rot270[_u8]( | a -> Qn | VCADD.I8 Qd, Qn, Qm, #270 | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcaddq_rot270[_u16]( | a -> Qn | VCADD.I16 Qd, Qn, Qm, #270 | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcaddq_rot270[_u32]( | a -> Qn | VCADD.I32 Qd, Qn, Qm, #270 | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcaddq_rot90_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VCADDT.F16 Qd, Qn, Qm, #90 | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcaddq_rot90_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VCADDT.F32 Qd, Qn, Qm, #90 | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vcaddq_rot90_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VCADDT.I8 Qd, Qn, Qm, #90 | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcaddq_rot90_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VCADDT.I16 Qd, Qn, Qm, #90 | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcaddq_rot90_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VCADDT.I32 Qd, Qn, Qm, #90 | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vcaddq_rot90_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VCADDT.I8 Qd, Qn, Qm, #90 | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcaddq_rot90_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VCADDT.I16 Qd, Qn, Qm, #90 | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcaddq_rot90_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VCADDT.I32 Qd, Qn, Qm, #90 | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcaddq_rot270_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VCADDT.F16 Qd, Qn, Qm, #270 | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcaddq_rot270_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VCADDT.F32 Qd, Qn, Qm, #270 | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vcaddq_rot270_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VCADDT.I8 Qd, Qn, Qm, #270 | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcaddq_rot270_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VCADDT.I16 Qd, Qn, Qm, #270 | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcaddq_rot270_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VCADDT.I32 Qd, Qn, Qm, #270 | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vcaddq_rot270_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Qm | VCADDT.I8 Qd, Qn, Qm, #270 | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcaddq_rot270_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Qm | VCADDT.I16 Qd, Qn, Qm, #270 | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcaddq_rot270_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Qm | VCADDT.I32 Qd, Qn, Qm, #270 | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcaddq_rot90_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VCADDT.F16 Qd, Qn, Qm, #90 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcaddq_rot90_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCADDT.F32 Qd, Qn, Qm, #90 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vcaddq_rot90_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VCADDT.I8 Qd, Qn, Qm, #90 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcaddq_rot90_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VCADDT.I16 Qd, Qn, Qm, #90 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcaddq_rot90_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VCADDT.I32 Qd, Qn, Qm, #90 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vcaddq_rot90_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VCADDT.I8 Qd, Qn, Qm, #90 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcaddq_rot90_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VCADDT.I16 Qd, Qn, Qm, #90 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcaddq_rot90_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VCADDT.I32 Qd, Qn, Qm, #90 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcaddq_rot270_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VCADDT.F16 Qd, Qn, Qm, #270 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcaddq_rot270_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCADDT.F32 Qd, Qn, Qm, #270 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vcaddq_rot270_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VCADDT.I8 Qd, Qn, Qm, #270 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcaddq_rot270_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VCADDT.I16 Qd, Qn, Qm, #270 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcaddq_rot270_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VCADDT.I32 Qd, Qn, Qm, #270 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vcaddq_rot270_x[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | p -> Rp | VCADDT.I8 Qd, Qn, Qm, #270 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcaddq_rot270_x[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VCADDT.I16 Qd, Qn, Qm, #270 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcaddq_rot270_x[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VCADDT.I32 Qd, Qn, Qm, #270 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhcaddq_rot90[_s8]( | a -> Qn | VHCADD.S8 Qd, Qn, Qm, #90 | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhcaddq_rot90[_s16]( | a -> Qn | VHCADD.S16 Qd, Qn, Qm, #90 | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhcaddq_rot90[_s32]( | a -> Qn | VHCADD.S32 Qd, Qn, Qm, #90 | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhcaddq_rot90_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VHCADDT.S8 Qd, Qn, Qm, #90 | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhcaddq_rot90_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VHCADDT.S16 Qd, Qn, Qm, #90 | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhcaddq_rot90_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VHCADDT.S32 Qd, Qn, Qm, #90 | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhcaddq_rot90_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VHCADDT.S8 Qd, Qn, Qm, #90 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhcaddq_rot90_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VHCADDT.S16 Qd, Qn, Qm, #90 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhcaddq_rot90_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VHCADDT.S32 Qd, Qn, Qm, #90 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhcaddq_rot270[_s8]( | a -> Qn | VHCADD.S8 Qd, Qn, Qm, #270 | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhcaddq_rot270[_s16]( | a -> Qn | VHCADD.S16 Qd, Qn, Qm, #270 | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhcaddq_rot270[_s32]( | a -> Qn | VHCADD.S32 Qd, Qn, Qm, #270 | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhcaddq_rot270_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Qm | VHCADDT.S8 Qd, Qn, Qm, #270 | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhcaddq_rot270_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Qm | VHCADDT.S16 Qd, Qn, Qm, #270 | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhcaddq_rot270_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Qm | VHCADDT.S32 Qd, Qn, Qm, #270 | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vhcaddq_rot270_x[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | p -> Rp | VHCADDT.S8 Qd, Qn, Qm, #270 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vhcaddq_rot270_x[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VHCADDT.S16 Qd, Qn, Qm, #270 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vhcaddq_rot270_x[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VHCADDT.S32 Qd, Qn, Qm, #270 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+------------------+---------------------------+
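
To make the rotation encodings above concrete, the following is an informal scalar model (not part of the specification) of ``vcaddq_rot90`` and ``vcaddq_rot270`` on ``float32x4_t``. Even-numbered lanes hold real parts and odd-numbered lanes imaginary parts; ``#90`` rotates the second operand by +90 degrees (multiplies it by *i*) before the addition, and ``#270`` rotates it by -90 degrees. The ``vhcaddq`` variants behave the same way except that each element of the integer result is halved.

.. code:: c

   /* Informal scalar model of vcaddq_rot90/_rot270[_f32] (not the
    * intrinsics themselves). Even lanes are real parts, odd lanes are
    * imaginary parts of two complex numbers per vector. */
   typedef struct { float lane[4]; } f32x4_model;

   static f32x4_model vcaddq_rot_f32_model(f32x4_model a, f32x4_model b,
                                           int rot /* 90 or 270 */)
   {
       f32x4_model r;
       for (int i = 0; i < 4; i += 2) {
           if (rot == 90) {
               /* a + i*b: re = a.re - b.im, im = a.im + b.re */
               r.lane[i]     = a.lane[i]     - b.lane[i + 1];
               r.lane[i + 1] = a.lane[i + 1] + b.lane[i];
           } else {
               /* a - i*b: re = a.re + b.im, im = a.im - b.re */
               r.lane[i]     = a.lane[i]     + b.lane[i + 1];
               r.lane[i + 1] = a.lane[i + 1] - b.lane[i];
           }
       }
       return r;
   }
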

Complex multiply-accumulate
~~~~~~~~~~~~~~~~~~~~~~~~~~~

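
Each rotation of ``vcmlaq`` accumulates one half of a complex product, as the following informal scalar sketch (not part of the specification) illustrates. Issuing the ``#0`` and ``#90`` forms back to back on the same accumulator performs a full complex multiply-accumulate; ``#180`` and ``#270`` give the negated halves.

.. code:: c

   /* Informal scalar model of vcmlaq_rot*[_f32] (not the intrinsics
    * themselves). Even lanes are real parts, odd lanes imaginary parts;
    * each rotation accumulates one partial product of b * c into acc. */
   typedef struct { float lane[4]; } f32x4_model;

   static f32x4_model vcmlaq_rot_f32_model(f32x4_model acc, f32x4_model b,
                                           f32x4_model c, int rot)
   {
       for (int i = 0; i < 4; i += 2) {
           float br = b.lane[i], bi = b.lane[i + 1];
           float cr = c.lane[i], ci = c.lane[i + 1];
           switch (rot) {
           case 0:   acc.lane[i] += br * cr; acc.lane[i + 1] += br * ci; break;
           case 90:  acc.lane[i] -= bi * ci; acc.lane[i + 1] += bi * cr; break;
           case 180: acc.lane[i] -= br * cr; acc.lane[i + 1] -= br * ci; break;
           case 270: acc.lane[i] += bi * ci; acc.lane[i + 1] -= bi * cr; break;
           }
       }
       return acc;
   }

For example, applying ``#0`` and then ``#90`` to a zeroed accumulator with ``b = 1 + 2i`` and ``c = 3 + 4i`` yields ``-5 + 10i``, the full complex product.
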
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+================================================+========================+==================================+===================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vcmlaq[_f16]( | a -> Qda | VCMLA.F16 Qda, Qn, Qm, #0 | Qda -> result | |
| float16x8_t a, | b -> Qn | | | |
| float16x8_t b, | c -> Qm | | | |
| float16x8_t c) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vcmlaq[_f32]( | a -> Qda | VCMLA.F32 Qda, Qn, Qm, #0 | Qda -> result | |
| float32x4_t a, | b -> Qn | | | |
| float32x4_t b, | c -> Qm | | | |
| float32x4_t c) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vcmlaq_rot90[_f16]( | a -> Qda | VCMLA.F16 Qda, Qn, Qm, #90 | Qda -> result | |
| float16x8_t a, | b -> Qn | | | |
| float16x8_t b, | c -> Qm | | | |
| float16x8_t c) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vcmlaq_rot90[_f32]( | a -> Qda | VCMLA.F32 Qda, Qn, Qm, #90 | Qda -> result | |
| float32x4_t a, | b -> Qn | | | |
| float32x4_t b, | c -> Qm | | | |
| float32x4_t c) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vcmlaq_rot180[_f16]( | a -> Qda | VCMLA.F16 Qda, Qn, Qm, #180 | Qda -> result | |
| float16x8_t a, | b -> Qn | | | |
| float16x8_t b, | c -> Qm | | | |
| float16x8_t c) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vcmlaq_rot180[_f32]( | a -> Qda | VCMLA.F32 Qda, Qn, Qm, #180 | Qda -> result | |
| float32x4_t a, | b -> Qn | | | |
| float32x4_t b, | c -> Qm | | | |
| float32x4_t c) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vcmlaq_rot270[_f16]( | a -> Qda | VCMLA.F16 Qda, Qn, Qm, #270 | Qda -> result | |
| float16x8_t a, | b -> Qn | | | |
| float16x8_t b, | c -> Qm | | | |
| float16x8_t c) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vcmlaq_rot270[_f32]( | a -> Qda | VCMLA.F32 Qda, Qn, Qm, #270 | Qda -> result | |
| float32x4_t a, | b -> Qn | | | |
| float32x4_t b, | c -> Qm | | | |
| float32x4_t c) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmlaq_m[_f16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| float16x8_t a, | b -> Qn | VPST | | |
| float16x8_t b, | c -> Qm | VCMLAT.F16 Qda, Qn, Qm, #0 | | |
| float16x8_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmlaq_m[_f32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| float32x4_t a, | b -> Qn | VPST | | |
| float32x4_t b, | c -> Qm | VCMLAT.F32 Qda, Qn, Qm, #0 | | |
| float32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmlaq_rot90_m[_f16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| float16x8_t a, | b -> Qn | VPST | | |
| float16x8_t b, | c -> Qm | VCMLAT.F16 Qda, Qn, Qm, #90 | | |
| float16x8_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmlaq_rot90_m[_f32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| float32x4_t a, | b -> Qn | VPST | | |
| float32x4_t b, | c -> Qm | VCMLAT.F32 Qda, Qn, Qm, #90 | | |
| float32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmlaq_rot180_m[_f16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| float16x8_t a, | b -> Qn | VPST | | |
| float16x8_t b, | c -> Qm | VCMLAT.F16 Qda, Qn, Qm, #180 | | |
| float16x8_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmlaq_rot180_m[_f32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| float32x4_t a, | b -> Qn | VPST | | |
| float32x4_t b, | c -> Qm | VCMLAT.F32 Qda, Qn, Qm, #180 | | |
| float32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmlaq_rot270_m[_f16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| float16x8_t a, | b -> Qn | VPST | | |
| float16x8_t b, | c -> Qm | VCMLAT.F16 Qda, Qn, Qm, #270 | | |
| float16x8_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmlaq_rot270_m[_f32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| float32x4_t a, | b -> Qn | VPST | | |
| float32x4_t b, | c -> Qm | VCMLAT.F32 Qda, Qn, Qm, #270 | | |
| float32x4_t c, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+----------------------------------+-------------------+---------------------------+
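The four VCMLA rotations in the table above each accumulate one half of a complex product. The ``vcmla_ref`` function below is a hypothetical scalar sketch of the per-pair semantics, assuming vectors hold interleaved (real, imaginary) values; it is not part of ``arm_mve.h``:

.. code:: c

    #include <assert.h>
    #include <math.h>

    /* Scalar model of VCMLA on one complex pair. `rot` selects which
       partial products are accumulated, matching the #0/#90/#180/#270
       immediates in the table above. */
    static void vcmla_ref(int rot, float acc[2],
                          const float b[2], const float c[2])
    {
        switch (rot) {
        case 0:   acc[0] += b[0] * c[0]; acc[1] += b[0] * c[1]; break;
        case 90:  acc[0] -= b[1] * c[1]; acc[1] += b[1] * c[0]; break;
        case 180: acc[0] -= b[0] * c[0]; acc[1] -= b[0] * c[1]; break;
        case 270: acc[0] += b[1] * c[1]; acc[1] -= b[1] * c[0]; break;
        }
    }

    int main(void)
    {
        /* A #0/#90 pair accumulates the full complex product b * c. */
        float acc[2] = { 0.0f, 0.0f };
        const float b[2] = { 1.0f, 2.0f };   /* 1 + 2i */
        const float c[2] = { 3.0f, 4.0f };   /* 3 + 4i */
        vcmla_ref(0, acc, b, c);
        vcmla_ref(90, acc, b, c);
        assert(fabsf(acc[0] - (-5.0f)) < 1e-6f); /* re: 1*3 - 2*4 */
        assert(fabsf(acc[1] - 10.0f) < 1e-6f);   /* im: 1*4 + 2*3 */
        return 0;
    }

This is why the rotations are typically issued in pairs (#0 with #90, or #180 with #270): together they form a full complex multiply-accumulate or its negation.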
Complex multiply
~~~~~~~~~~~~~~~~
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+================================================+========================+=================================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmulq[_f16]( | a -> Qn | VCMUL.F16 Qd, Qn, Qm, #0 | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmulq[_f32]( | a -> Qn | VCMUL.F32 Qd, Qn, Qm, #0 | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmulq_rot90[_f16]( | a -> Qn | VCMUL.F16 Qd, Qn, Qm, #90 | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmulq_rot90[_f32]( | a -> Qn | VCMUL.F32 Qd, Qn, Qm, #90 | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmulq_rot180[_f16]( | a -> Qn | VCMUL.F16 Qd, Qn, Qm, #180 | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmulq_rot180[_f32]( | a -> Qn | VCMUL.F32 Qd, Qn, Qm, #180 | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmulq_rot270[_f16]( | a -> Qn | VCMUL.F16 Qd, Qn, Qm, #270 | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float16x8_t b) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmulq_rot270[_f32]( | a -> Qn | VCMUL.F32 Qd, Qn, Qm, #270 | Qd -> result | |
| float32x4_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmulq_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VCMULT.F16 Qd, Qn, Qm, #0 | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmulq_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VCMULT.F32 Qd, Qn, Qm, #0 | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmulq_rot90_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VCMULT.F16 Qd, Qn, Qm, #90 | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmulq_rot90_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VCMULT.F32 Qd, Qn, Qm, #90 | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmulq_rot180_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VCMULT.F16 Qd, Qn, Qm, #180 | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmulq_rot180_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VCMULT.F32 Qd, Qn, Qm, #180 | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmulq_rot270_m[_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Qm | VCMULT.F16 Qd, Qn, Qm, #270 | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmulq_rot270_m[_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Qm | VCMULT.F32 Qd, Qn, Qm, #270 | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmulq_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VCMULT.F16 Qd, Qn, Qm, #0 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmulq_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCMULT.F32 Qd, Qn, Qm, #0 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmulq_rot90_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VCMULT.F16 Qd, Qn, Qm, #90 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmulq_rot90_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCMULT.F32 Qd, Qn, Qm, #90 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmulq_rot180_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VCMULT.F16 Qd, Qn, Qm, #180 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmulq_rot180_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCMULT.F32 Qd, Qn, Qm, #180 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcmulq_rot270_x[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float16x8_t b, | p -> Rp | VCMULT.F16 Qd, Qn, Qm, #270 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcmulq_rot270_x[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCMULT.F32 Qd, Qn, Qm, #270 | | |
| mve_pred16_t p) | | | | |
+------------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
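VCMUL is the non-accumulating counterpart of VCMLA: each rotation writes one half of a complex product rather than adding it to a destination. The ``vcmul_ref`` helper below is a hypothetical scalar sketch of the per-pair semantics under that assumption, not an ``arm_mve.h`` function:

.. code:: c

    #include <assert.h>
    #include <math.h>

    /* Scalar model of VCMUL on one interleaved (real, imaginary) pair. */
    static void vcmul_ref(int rot, float r[2],
                          const float a[2], const float b[2])
    {
        switch (rot) {
        case 0:   r[0] =  a[0] * b[0]; r[1] =  a[0] * b[1]; break;
        case 90:  r[0] = -a[1] * b[1]; r[1] =  a[1] * b[0]; break;
        case 180: r[0] = -a[0] * b[0]; r[1] = -a[0] * b[1]; break;
        case 270: r[0] =  a[1] * b[1]; r[1] = -a[1] * b[0]; break;
        }
    }

    int main(void)
    {
        /* Summing the #0 and #90 results yields the full product a * b. */
        const float a[2] = { 1.0f, 2.0f };   /* 1 + 2i */
        const float b[2] = { 3.0f, 4.0f };   /* 3 + 4i */
        float r0[2], r90[2];
        vcmul_ref(0, r0, a, b);
        vcmul_ref(90, r90, a, b);
        assert(fabsf((r0[0] + r90[0]) - (-5.0f)) < 1e-6f);
        assert(fabsf((r0[1] + r90[1]) - 10.0f) < 1e-6f);
        return 0;
    }

In practice a full complex multiply is usually formed as a ``vcmulq`` at #0 followed by a ``vcmlaq`` at #90 into the same destination.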
Load
====
Stride
~~~~~~
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==============================================================+========================+================================+===========================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16x2_t [__arm_]vld2q[_s8](int8_t const *addr) | addr -> Rn | VLD20.8 {Qd - Qd2}, [Rn] | Qd -> result.val[0] | |
| | | VLD21.8 {Qd - Qd2}, [Rn] | Qd2 -> result.val[1] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8x2_t [__arm_]vld2q[_s16](int16_t const *addr) | addr -> Rn | VLD20.16 {Qd - Qd2}, [Rn] | Qd -> result.val[0] | |
| | | VLD21.16 {Qd - Qd2}, [Rn] | Qd2 -> result.val[1] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4x2_t [__arm_]vld2q[_s32](int32_t const *addr) | addr -> Rn | VLD20.32 {Qd - Qd2}, [Rn] | Qd -> result.val[0] | |
| | | VLD21.32 {Qd - Qd2}, [Rn] | Qd2 -> result.val[1] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16x2_t [__arm_]vld2q[_u8](uint8_t const *addr) | addr -> Rn | VLD20.8 {Qd - Qd2}, [Rn] | Qd -> result.val[0] | |
| | | VLD21.8 {Qd - Qd2}, [Rn] | Qd2 -> result.val[1] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8x2_t [__arm_]vld2q[_u16](uint16_t const *addr) | addr -> Rn | VLD20.16 {Qd - Qd2}, [Rn] | Qd -> result.val[0] | |
| | | VLD21.16 {Qd - Qd2}, [Rn] | Qd2 -> result.val[1] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4x2_t [__arm_]vld2q[_u32](uint32_t const *addr) | addr -> Rn | VLD20.32 {Qd - Qd2}, [Rn] | Qd -> result.val[0] | |
| | | VLD21.32 {Qd - Qd2}, [Rn] | Qd2 -> result.val[1] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8x2_t [__arm_]vld2q[_f16](float16_t const *addr) | addr -> Rn | VLD20.16 {Qd - Qd2}, [Rn] | Qd -> result.val[0] | |
| | | VLD21.16 {Qd - Qd2}, [Rn] | Qd2 -> result.val[1] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4x2_t [__arm_]vld2q[_f32](float32_t const *addr) | addr -> Rn | VLD20.32 {Qd - Qd2}, [Rn] | Qd -> result.val[0] | |
| | | VLD21.32 {Qd - Qd2}, [Rn] | Qd2 -> result.val[1] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16x4_t [__arm_]vld4q[_s8](int8_t const *addr) | addr -> Rn | VLD40.8 {Qd - Qd4}, [Rn] | Qd -> result.val[0] | |
| | | VLD41.8 {Qd - Qd4}, [Rn] | Qd2 -> result.val[1] | |
| | | VLD42.8 {Qd - Qd4}, [Rn] | Qd3 -> result.val[2] | |
| | | VLD43.8 {Qd - Qd4}, [Rn] | Qd4 -> result.val[3] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8x4_t [__arm_]vld4q[_s16](int16_t const *addr) | addr -> Rn | VLD40.16 {Qd - Qd4}, [Rn] | Qd -> result.val[0] | |
| | | VLD41.16 {Qd - Qd4}, [Rn] | Qd2 -> result.val[1] | |
| | | VLD42.16 {Qd - Qd4}, [Rn] | Qd3 -> result.val[2] | |
| | | VLD43.16 {Qd - Qd4}, [Rn] | Qd4 -> result.val[3] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4x4_t [__arm_]vld4q[_s32](int32_t const *addr) | addr -> Rn | VLD40.32 {Qd - Qd4}, [Rn] | Qd -> result.val[0] | |
| | | VLD41.32 {Qd - Qd4}, [Rn] | Qd2 -> result.val[1] | |
| | | VLD42.32 {Qd - Qd4}, [Rn] | Qd3 -> result.val[2] | |
| | | VLD43.32 {Qd - Qd4}, [Rn] | Qd4 -> result.val[3] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16x4_t [__arm_]vld4q[_u8](uint8_t const *addr) | addr -> Rn | VLD40.8 {Qd - Qd4}, [Rn] | Qd -> result.val[0] | |
| | | VLD41.8 {Qd - Qd4}, [Rn] | Qd2 -> result.val[1] | |
| | | VLD42.8 {Qd - Qd4}, [Rn] | Qd3 -> result.val[2] | |
| | | VLD43.8 {Qd - Qd4}, [Rn] | Qd4 -> result.val[3] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8x4_t [__arm_]vld4q[_u16](uint16_t const *addr) | addr -> Rn | VLD40.16 {Qd - Qd4}, [Rn] | Qd -> result.val[0] | |
| | | VLD41.16 {Qd - Qd4}, [Rn] | Qd2 -> result.val[1] | |
| | | VLD42.16 {Qd - Qd4}, [Rn] | Qd3 -> result.val[2] | |
| | | VLD43.16 {Qd - Qd4}, [Rn] | Qd4 -> result.val[3] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4x4_t [__arm_]vld4q[_u32](uint32_t const *addr) | addr -> Rn | VLD40.32 {Qd - Qd4}, [Rn] | Qd -> result.val[0] | |
| | | VLD41.32 {Qd - Qd4}, [Rn] | Qd2 -> result.val[1] | |
| | | VLD42.32 {Qd - Qd4}, [Rn] | Qd3 -> result.val[2] | |
| | | VLD43.32 {Qd - Qd4}, [Rn] | Qd4 -> result.val[3] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8x4_t [__arm_]vld4q[_f16](float16_t const *addr) | addr -> Rn | VLD40.16 {Qd - Qd4}, [Rn] | Qd -> result.val[0] | |
| | | VLD41.16 {Qd - Qd4}, [Rn] | Qd2 -> result.val[1] | |
| | | VLD42.16 {Qd - Qd4}, [Rn] | Qd3 -> result.val[2] | |
| | | VLD43.16 {Qd - Qd4}, [Rn] | Qd4 -> result.val[3] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4x4_t [__arm_]vld4q[_f32](float32_t const *addr) | addr -> Rn | VLD40.32 {Qd - Qd4}, [Rn] | Qd -> result.val[0] | |
| | | VLD41.32 {Qd - Qd4}, [Rn] | Qd2 -> result.val[1] | |
| | | VLD42.32 {Qd - Qd4}, [Rn] | Qd3 -> result.val[2] | |
| | | VLD43.32 {Qd - Qd4}, [Rn] | Qd4 -> result.val[3] | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vld1q[_s8](int8_t const *base) | base -> Rn | VLDRB.8 Qd, [Rn] | Qd -> result | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vld1q[_s16](int16_t const *base) | base -> Rn | VLDRH.16 Qd, [Rn] | Qd -> result | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vld1q[_s32](int32_t const *base) | base -> Rn | VLDRW.32 Qd, [Rn] | Qd -> result | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vld1q[_u8](uint8_t const *base) | base -> Rn | VLDRB.8 Qd, [Rn] | Qd -> result | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vld1q[_u16](uint16_t const *base) | base -> Rn | VLDRH.16 Qd, [Rn] | Qd -> result | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vld1q[_u32](uint32_t const *base) | base -> Rn | VLDRW.32 Qd, [Rn] | Qd -> result | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vld1q[_f16](float16_t const *base) | base -> Rn | VLDRH.16 Qd, [Rn] | Qd -> result | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vld1q[_f32](float32_t const *base) | base -> Rn | VLDRW.32 Qd, [Rn] | Qd -> result | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vld1q_z[_s8]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int8_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRBT.8 Qd, [Rn] | | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vld1q_z[_s16]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int16_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRHT.16 Qd, [Rn] | | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vld1q_z[_s32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int32_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRWT.32 Qd, [Rn] | | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vld1q_z[_u8]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint8_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRBT.8 Qd, [Rn] | | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vld1q_z[_u16]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint16_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRHT.16 Qd, [Rn] | | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vld1q_z[_u32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRWT.32 Qd, [Rn] | | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vld1q_z[_f16]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| float16_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRHT.16 Qd, [Rn] | | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vld1q_z[_f32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| float32_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRWT.32 Qd, [Rn] | | |
+--------------------------------------------------------------+------------------------+--------------------------------+---------------------------+---------------------------+
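The ``vld2q`` and ``vld4q`` intrinsics above perform de-interleaving loads: consecutive structures in memory are split across the ``val[]`` members of the result. The ``vld2q_ref`` function below is a hypothetical scalar sketch of that access pattern for the ``int16x8x2_t`` case, not an ``arm_mve.h`` function:

.. code:: c

    #include <assert.h>
    #include <stdint.h>

    #define LANES 8

    /* Scalar model of the vld2q de-interleaving load: element i of
       val[j] comes from addr[2*i + j]. vld4q generalises this to a
       stride of four across val[0]..val[3]. */
    static void vld2q_ref(const int16_t *addr,
                          int16_t val0[LANES], int16_t val1[LANES])
    {
        for (int i = 0; i < LANES; i++) {
            val0[i] = addr[2 * i];      /* result.val[0] */
            val1[i] = addr[2 * i + 1];  /* result.val[1] */
        }
    }

    int main(void)
    {
        int16_t mem[2 * LANES];
        for (int i = 0; i < 2 * LANES; i++) mem[i] = (int16_t)i;
        int16_t v0[LANES], v1[LANES];
        vld2q_ref(mem, v0, v1);
        assert(v0[3] == 6 && v1[3] == 7);  /* even/odd split */
        return 0;
    }

This pattern is convenient for interleaved data such as complex or stereo samples, where ``val[0]`` receives one channel and ``val[1]`` the other.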
Consecutive
~~~~~~~~~~~
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+===========================================================+========================+=========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vldrbq_s8(int8_t const *base) | base -> Rn | VLDRB.8 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vldrbq_s16(int8_t const *base) | base -> Rn | VLDRB.S16 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrbq_s32(int8_t const *base) | base -> Rn | VLDRB.S32 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vldrbq_u8(uint8_t const *base) | base -> Rn | VLDRB.8 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vldrbq_u16(uint8_t const *base) | base -> Rn | VLDRB.U16 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrbq_u32(uint8_t const *base) | base -> Rn | VLDRB.U32 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vldrbq_z_s8( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int8_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRBT.8 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vldrbq_z_s16( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int8_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRBT.S16 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrbq_z_s32( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int8_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRBT.S32 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vldrbq_z_u8( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint8_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRBT.8 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vldrbq_z_u16( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint8_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRBT.U16 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrbq_z_u32( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint8_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRBT.U32 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vldrhq_s16(int16_t const *base) | base -> Rn | VLDRH.16 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrhq_s32(int16_t const *base) | base -> Rn | VLDRH.S32 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vldrhq_u16(uint16_t const *base) | base -> Rn | VLDRH.16 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrhq_u32(uint16_t const *base) | base -> Rn | VLDRH.U32 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vldrhq_f16(float16_t const *base) | base -> Rn | VLDRH.16 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vldrhq_z_s16( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int16_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRHT.S16 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrhq_z_s32( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int16_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRHT.S32 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vldrhq_z_u16( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint16_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRHT.U16 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrhq_z_u32( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint16_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRHT.U32 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vldrhq_z_f16( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| float16_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRHT.F16 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrwq_s32(int32_t const *base) | base -> Rn | VLDRW.32 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrwq_u32(uint32_t const *base) | base -> Rn | VLDRW.32 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vldrwq_f32(float32_t const *base) | base -> Rn | VLDRW.32 Qd, [Rn] | Qd -> result | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrwq_z_s32( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int32_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRWT.32 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrwq_z_u32( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRWT.32 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vldrwq_z_f32( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| float32_t const *base, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VLDRWT.32 Qd, [Rn] | | |
+-----------------------------------------------------------+------------------------+-------------------------+------------------+---------------------------+
Gather
~~~~~~
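Gather loads fetch one element per lane from ``base`` plus a per-lane offset held in a vector register. The plain ``_offset`` forms treat each offset as a byte offset, while the ``_shifted_offset`` forms scale each offset by the element size, which is the ``UXTW #1``/``UXTW #2`` shift visible in the instruction column. A scalar model of ``vldrwq_gather_shifted_offset[_s32]`` (illustration only, not the intrinsic itself):

.. code:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Scalar model of vldrwq_gather_shifted_offset[_s32]
    * (VLDRW.U32 Qd, [Rn, Qm, UXTW #2]): each 32-bit lane i loads from
    * base + (offset[i] << 2) bytes, i.e. base[offset[i]] in C terms. */
   static void model_vldrwq_gather_shifted_offset_s32(const int32_t *base,
                                                      const uint32_t offset[4],
                                                      int32_t out[4])
   {
       for (int i = 0; i < 4; i++)
           out[i] = base[offset[i]]; /* offset scaled by sizeof(int32_t) */
   }

   int main(void)
   {
       int32_t table[8] = { 0, 10, 20, 30, 40, 50, 60, 70 };
       uint32_t idx[4]  = { 7, 0, 3, 3 }; /* lanes may repeat, any order */
       int32_t dst[4];
       model_vldrwq_gather_shifted_offset_s32(table, idx, dst);
       printf("%d %d %d %d\n", dst[0], dst[1], dst[2], dst[3]); /* 70 0 30 30 */
       return 0;
   }

The ``_gather_base`` forms later in this table differ in taking the per-lane addresses themselves in a vector register (``Qn``) plus a small immediate offset, and the ``_wb`` (write-back) forms additionally update that address vector, as indicated by ``Qn -> *addr`` in the result column.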
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+===============================================================+==============================+======================================+===================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vldrhq_gather_offset[_s16]( | base -> Rn | VLDRH.U16 Qd, [Rn, Qm] | Qd -> result | |
| int16_t const *base, | offset -> Qm | | | |
| uint16x8_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrhq_gather_offset[_s32]( | base -> Rn | VLDRH.S32 Qd, [Rn, Qm] | Qd -> result | |
| int16_t const *base, | offset -> Qm | | | |
| uint32x4_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vldrhq_gather_offset[_u16]( | base -> Rn | VLDRH.U16 Qd, [Rn, Qm] | Qd -> result | |
| uint16_t const *base, | offset -> Qm | | | |
| uint16x8_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrhq_gather_offset[_u32]( | base -> Rn | VLDRH.U32 Qd, [Rn, Qm] | Qd -> result | |
| uint16_t const *base, | offset -> Qm | | | |
| uint32x4_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vldrhq_gather_offset[_f16]( | base -> Rn | VLDRH.F16 Qd, [Rn, Qm] | Qd -> result | |
| float16_t const *base, | offset -> Qm | | | |
| uint16x8_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vldrhq_gather_offset_z[_s16]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int16_t const *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | p -> Rp | VLDRHT.U16 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrhq_gather_offset_z[_s32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int16_t const *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | p -> Rp | VLDRHT.S32 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vldrhq_gather_offset_z[_u16]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint16_t const *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | p -> Rp | VLDRHT.U16 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrhq_gather_offset_z[_u32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint16_t const *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | p -> Rp | VLDRHT.U32 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vldrhq_gather_offset_z[_f16]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| float16_t const *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | p -> Rp | VLDRHT.F16 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vldrhq_gather_shifted_offset[_s16]( | base -> Rn | VLDRH.U16 Qd, [Rn, Qm, UXTW #1] | Qd -> result | |
| int16_t const *base, | offset -> Qm | | | |
| uint16x8_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrhq_gather_shifted_offset[_s32]( | base -> Rn | VLDRH.S32 Qd, [Rn, Qm, UXTW #1] | Qd -> result | |
| int16_t const *base, | offset -> Qm | | | |
| uint32x4_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vldrhq_gather_shifted_offset[_u16]( | base -> Rn | VLDRH.U16 Qd, [Rn, Qm, UXTW #1] | Qd -> result | |
| uint16_t const *base, | offset -> Qm | | | |
| uint16x8_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrhq_gather_shifted_offset[_u32]( | base -> Rn | VLDRH.U32 Qd, [Rn, Qm, UXTW #1] | Qd -> result | |
| uint16_t const *base, | offset -> Qm | | | |
| uint32x4_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vldrhq_gather_shifted_offset[_f16]( | base -> Rn | VLDRH.F16 Qd, [Rn, Qm, UXTW #1] | Qd -> result | |
| float16_t const *base, | offset -> Qm | | | |
| uint16x8_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vldrhq_gather_shifted_offset_z[_s16]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int16_t const *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | p -> Rp | VLDRHT.U16 Qd, [Rn, Qm, UXTW #1] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrhq_gather_shifted_offset_z[_s32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int16_t const *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | p -> Rp | VLDRHT.S32 Qd, [Rn, Qm, UXTW #1] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vldrhq_gather_shifted_offset_z[_u16]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint16_t const *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | p -> Rp | VLDRHT.U16 Qd, [Rn, Qm, UXTW #1] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrhq_gather_shifted_offset_z[_u32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint16_t const *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | p -> Rp | VLDRHT.U32 Qd, [Rn, Qm, UXTW #1] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vldrhq_gather_shifted_offset_z[_f16]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| float16_t const *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | p -> Rp | VLDRHT.F16 Qd, [Rn, Qm, UXTW #1] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vldrbq_gather_offset[_s8]( | base -> Rn | VLDRB.U8 Qd, [Rn, Qm] | Qd -> result | |
| int8_t const *base, | offset -> Qm | | | |
| uint8x16_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vldrbq_gather_offset[_s16]( | base -> Rn | VLDRB.S16 Qd, [Rn, Qm] | Qd -> result | |
| int8_t const *base, | offset -> Qm | | | |
| uint16x8_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrbq_gather_offset[_s32]( | base -> Rn | VLDRB.S32 Qd, [Rn, Qm] | Qd -> result | |
| int8_t const *base, | offset -> Qm | | | |
| uint32x4_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vldrbq_gather_offset[_u8]( | base -> Rn | VLDRB.U8 Qd, [Rn, Qm] | Qd -> result | |
| uint8_t const *base, | offset -> Qm | | | |
| uint8x16_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vldrbq_gather_offset[_u16]( | base -> Rn | VLDRB.U16 Qd, [Rn, Qm] | Qd -> result | |
| uint8_t const *base, | offset -> Qm | | | |
| uint16x8_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrbq_gather_offset[_u32]( | base -> Rn | VLDRB.U32 Qd, [Rn, Qm] | Qd -> result | |
| uint8_t const *base, | offset -> Qm | | | |
| uint32x4_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vldrbq_gather_offset_z[_s8]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int8_t const *base, | offset -> Qm | VPST | | |
| uint8x16_t offset, | p -> Rp | VLDRBT.U8 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vldrbq_gather_offset_z[_s16]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int8_t const *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | p -> Rp | VLDRBT.S16 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrbq_gather_offset_z[_s32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int8_t const *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | p -> Rp | VLDRBT.S32 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vldrbq_gather_offset_z[_u8]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint8_t const *base, | offset -> Qm | VPST | | |
| uint8x16_t offset, | p -> Rp | VLDRBT.U8 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vldrbq_gather_offset_z[_u16]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint8_t const *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | p -> Rp | VLDRBT.U16 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrbq_gather_offset_z[_u32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint8_t const *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | p -> Rp | VLDRBT.U32 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrwq_gather_offset[_s32]( | base -> Rn | VLDRW.U32 Qd, [Rn, Qm] | Qd -> result | |
| int32_t const *base, | offset -> Qm | | | |
| uint32x4_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrwq_gather_offset[_u32]( | base -> Rn | VLDRW.U32 Qd, [Rn, Qm] | Qd -> result | |
| uint32_t const *base, | offset -> Qm | | | |
| uint32x4_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vldrwq_gather_offset[_f32]( | base -> Rn | VLDRW.U32 Qd, [Rn, Qm] | Qd -> result | |
| float32_t const *base, | offset -> Qm | | | |
| uint32x4_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrwq_gather_offset_z[_s32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int32_t const *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | p -> Rp | VLDRWT.U32 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrwq_gather_offset_z[_u32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t const *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | p -> Rp | VLDRWT.U32 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vldrwq_gather_offset_z[_f32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| float32_t const *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | p -> Rp | VLDRWT.U32 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrwq_gather_shifted_offset[_s32]( | base -> Rn | VLDRW.U32 Qd, [Rn, Qm, UXTW #2] | Qd -> result | |
| int32_t const *base, | offset -> Qm | | | |
| uint32x4_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrwq_gather_shifted_offset[_u32]( | base -> Rn | VLDRW.U32 Qd, [Rn, Qm, UXTW #2] | Qd -> result | |
| uint32_t const *base, | offset -> Qm | | | |
| uint32x4_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vldrwq_gather_shifted_offset[_f32]( | base -> Rn | VLDRW.U32 Qd, [Rn, Qm, UXTW #2] | Qd -> result | |
| float32_t const *base, | offset -> Qm | | | |
| uint32x4_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrwq_gather_shifted_offset_z[_s32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int32_t const *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | p -> Rp | VLDRWT.U32 Qd, [Rn, Qm, UXTW #2] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrwq_gather_shifted_offset_z[_u32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint32_t const *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | p -> Rp | VLDRWT.U32 Qd, [Rn, Qm, UXTW #2] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vldrwq_gather_shifted_offset_z[_f32]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| float32_t const *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | p -> Rp | VLDRWT.U32 Qd, [Rn, Qm, UXTW #2] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrwq_gather_base_s32( | addr -> Qn | VLDRW.U32 Qd, [Qn, #offset] | Qd -> result | |
| uint32x4_t addr, | offset in +/-4*[0..127] | | | |
| const int offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrwq_gather_base_u32( | addr -> Qn | VLDRW.U32 Qd, [Qn, #offset] | Qd -> result | |
| uint32x4_t addr, | offset in +/-4*[0..127] | | | |
| const int offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vldrwq_gather_base_f32( | addr -> Qn | VLDRW.U32 Qd, [Qn, #offset] | Qd -> result | |
| uint32x4_t addr, | offset in +/-4*[0..127] | | | |
| const int offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrwq_gather_base_z_s32( | addr -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t addr, | offset in +/-4*[0..127] | VPST | | |
| const int offset, | p -> Rp | VLDRWT.U32 Qd, [Qn, #offset] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrwq_gather_base_z_u32( | addr -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t addr, | offset in +/-4*[0..127] | VPST | | |
| const int offset, | p -> Rp | VLDRWT.U32 Qd, [Qn, #offset] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vldrwq_gather_base_z_f32( | addr -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t addr, | offset in +/-4*[0..127] | VPST | | |
| const int offset, | p -> Rp | VLDRWT.U32 Qd, [Qn, #offset] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrwq_gather_base_wb_s32( | *addr -> Qn | VLDRW.U32 Qd, [Qn, #offset]! | Qd -> result | |
| uint32x4_t *addr, | offset in +/-4*[0..127] | | Qn -> *addr | |
| const int offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrwq_gather_base_wb_u32( | *addr -> Qn | VLDRW.U32 Qd, [Qn, #offset]! | Qd -> result | |
| uint32x4_t *addr, | offset in +/-4*[0..127] | | Qn -> *addr | |
| const int offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vldrwq_gather_base_wb_f32( | *addr -> Qn | VLDRW.U32 Qd, [Qn, #offset]! | Qd -> result | |
| uint32x4_t *addr, | offset in +/-4*[0..127] | | Qn -> *addr | |
| const int offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vldrwq_gather_base_wb_z_s32( | *addr -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t *addr, | offset in +/-4*[0..127] | VPST | Qn -> *addr | |
| const int offset, | p -> Rp | VLDRWT.U32 Qd, [Qn, #offset]! | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vldrwq_gather_base_wb_z_u32( | *addr -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t *addr, | offset in +/-4*[0..127] | VPST | Qn -> *addr | |
| const int offset, | p -> Rp | VLDRWT.U32 Qd, [Qn, #offset]! | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vldrwq_gather_base_wb_z_f32( | *addr -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t *addr, | offset in +/-4*[0..127] | VPST | Qn -> *addr | |
| const int offset, | p -> Rp | VLDRWT.U32 Qd, [Qn, #offset]! | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vldrdq_gather_offset[_s64]( | base -> Rn | VLDRD.U64 Qd, [Rn, Qm] | Qd -> result | |
| int64_t const *base, | offset -> Qm | | | |
| uint64x2_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vldrdq_gather_offset[_u64]( | base -> Rn | VLDRD.U64 Qd, [Rn, Qm] | Qd -> result | |
| uint64_t const *base, | offset -> Qm | | | |
| uint64x2_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vldrdq_gather_offset_z[_s64]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int64_t const *base, | offset -> Qm | VPST | | |
| uint64x2_t offset, | p -> Rp | VLDRDT.U64 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vldrdq_gather_offset_z[_u64]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint64_t const *base, | offset -> Qm | VPST | | |
| uint64x2_t offset, | p -> Rp | VLDRDT.U64 Qd, [Rn, Qm] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vldrdq_gather_shifted_offset[_s64]( | base -> Rn | VLDRD.U64 Qd, [Rn, Qm, UXTW #3] | Qd -> result | |
| int64_t const *base, | offset -> Qm | | | |
| uint64x2_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vldrdq_gather_shifted_offset[_u64]( | base -> Rn | VLDRD.U64 Qd, [Rn, Qm, UXTW #3] | Qd -> result | |
| uint64_t const *base, | offset -> Qm | | | |
| uint64x2_t offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vldrdq_gather_shifted_offset_z[_s64]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| int64_t const *base, | offset -> Qm | VPST | | |
| uint64x2_t offset, | p -> Rp | VLDRDT.U64 Qd, [Rn, Qm, UXTW #3] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vldrdq_gather_shifted_offset_z[_u64]( | base -> Rn | VMSR P0, Rp | Qd -> result | |
| uint64_t const *base, | offset -> Qm | VPST | | |
| uint64x2_t offset, | p -> Rp | VLDRDT.U64 Qd, [Rn, Qm, UXTW #3] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vldrdq_gather_base_s64( | addr -> Qn | VLDRD.64 Qd, [Qn, #offset] | Qd -> result | |
| uint64x2_t addr, | offset in +/-8*[0..127] | | | |
| const int offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vldrdq_gather_base_u64( | addr -> Qn | VLDRD.64 Qd, [Qn, #offset] | Qd -> result | |
| uint64x2_t addr, | offset in +/-8*[0..127] | | | |
| const int offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vldrdq_gather_base_z_s64( | addr -> Qn | VMSR P0, Rp | Qd -> result | |
| uint64x2_t addr, | offset in +/-8*[0..127] | VPST | | |
| const int offset, | p -> Rp | VLDRDT.U64 Qd, [Qn, #offset] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vldrdq_gather_base_z_u64( | addr -> Qn | VMSR P0, Rp | Qd -> result | |
| uint64x2_t addr, | offset in +/-8*[0..127] | VPST | | |
| const int offset, | p -> Rp | VLDRDT.U64 Qd, [Qn, #offset] | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vldrdq_gather_base_wb_s64( | *addr -> Qn | VLDRD.64 Qd, [Qn, #offset]! | Qd -> result | |
| uint64x2_t *addr, | offset in +/-8*[0..127] | | Qn -> *addr | |
| const int offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vldrdq_gather_base_wb_u64( | *addr -> Qn | VLDRD.64 Qd, [Qn, #offset]! | Qd -> result | |
| uint64x2_t *addr, | offset in +/-8*[0..127] | | Qn -> *addr | |
| const int offset) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vldrdq_gather_base_wb_z_s64( | *addr -> Qn | VMSR P0, Rp | Qd -> result | |
| uint64x2_t *addr, | offset in +/-8*[0..127] | VPST | Qn -> *addr | |
| const int offset, | p -> Rp | VLDRDT.U64 Qd, [Qn, #offset]! | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vldrdq_gather_base_wb_z_u64( | *addr -> Qn | VMSR P0, Rp | Qd -> result | |
| uint64x2_t *addr, | offset in +/-8*[0..127] | VPST | Qn -> *addr | |
| const int offset, | p -> Rp | VLDRDT.U64 Qd, [Qn, #offset]! | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------------+------------------------------+--------------------------------------+-------------------+---------------------------+
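The write-back gather loads above can be modelled in portable scalar C. The sketch below is behavioural only, not the intrinsic itself: ``gather_base_wb_s32_model`` is a hypothetical name, and it holds lane addresses in ``uintptr_t`` so the model runs on any host, whereas the architectural ``vldrwq_gather_base_wb_s32`` keeps 32-bit addresses in a ``uint32x4_t`` and requires ``offset`` to be a multiple of 4 in the range +/-4*[0..127].

.. code:: c

   #include <stdint.h>

   /* Behavioural sketch (hypothetical helper, not an ACLE function) of
    * vldrwq_gather_base_wb_s32: each lane i loads a 32-bit value from
    * addr[i] + offset, then the incremented address is written back,
    * matching the "Qn -> *addr" column in the table above. */
   static void gather_base_wb_s32_model(uintptr_t addr[4], int offset,
                                        int32_t result[4])
   {
       for (int i = 0; i < 4; i++) {
           result[i] = *(const int32_t *)(addr[i] + (uintptr_t)(intptr_t)offset);
           addr[i] += (uintptr_t)(intptr_t)offset;  /* write-back */
       }
   }

The non-write-back ``vldrwq_gather_base_s32`` behaves the same way except that the address vector is left unchanged.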
Store
=====
Stride
~~~~~~
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=================================+==========================+================================+==========+===========================+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst2q[_s8]( | addr -> Rn | VST20.8 {Qd - Qd2}, [Rn] | | |
| int8_t *addr, | value.val[0] -> Qd | VST21.8 {Qd - Qd2}, [Rn] | | |
| int8x16x2_t value) | value.val[1] -> Qd2 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst2q[_s16]( | addr -> Rn | VST20.16 {Qd - Qd2}, [Rn] | | |
| int16_t *addr, | value.val[0] -> Qd | VST21.16 {Qd - Qd2}, [Rn] | | |
| int16x8x2_t value) | value.val[1] -> Qd2 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst2q[_s32]( | addr -> Rn | VST20.32 {Qd - Qd2}, [Rn] | | |
| int32_t *addr, | value.val[0] -> Qd | VST21.32 {Qd - Qd2}, [Rn] | | |
| int32x4x2_t value) | value.val[1] -> Qd2 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst2q[_u8]( | addr -> Rn | VST20.8 {Qd - Qd2}, [Rn] | | |
| uint8_t *addr, | value.val[0] -> Qd | VST21.8 {Qd - Qd2}, [Rn] | | |
| uint8x16x2_t value) | value.val[1] -> Qd2 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst2q[_u16]( | addr -> Rn | VST20.16 {Qd - Qd2}, [Rn] | | |
| uint16_t *addr, | value.val[0] -> Qd | VST21.16 {Qd - Qd2}, [Rn] | | |
| uint16x8x2_t value) | value.val[1] -> Qd2 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst2q[_u32]( | addr -> Rn | VST20.32 {Qd - Qd2}, [Rn] | | |
| uint32_t *addr, | value.val[0] -> Qd | VST21.32 {Qd - Qd2}, [Rn] | | |
| uint32x4x2_t value) | value.val[1] -> Qd2 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst2q[_f16]( | addr -> Rn | VST20.16 {Qd - Qd2}, [Rn] | | |
| float16_t *addr, | value.val[0] -> Qd | VST21.16 {Qd - Qd2}, [Rn] | | |
| float16x8x2_t value) | value.val[1] -> Qd2 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst2q[_f32]( | addr -> Rn | VST20.32 {Qd - Qd2}, [Rn] | | |
| float32_t *addr, | value.val[0] -> Qd | VST21.32 {Qd - Qd2}, [Rn] | | |
| float32x4x2_t value) | value.val[1] -> Qd2 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst4q[_s8]( | addr -> Rn | VST40.8 {Qd - Qd4}, [Rn] | | |
| int8_t *addr, | value.val[0] -> Qd | VST41.8 {Qd - Qd4}, [Rn] | | |
| int8x16x4_t value) | value.val[1] -> Qd2 | VST42.8 {Qd - Qd4}, [Rn] | | |
| | value.val[2] -> Qd3 | VST43.8 {Qd - Qd4}, [Rn] | | |
| | value.val[3] -> Qd4 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst4q[_s16]( | addr -> Rn | VST40.16 {Qd - Qd4}, [Rn] | | |
| int16_t *addr, | value.val[0] -> Qd | VST41.16 {Qd - Qd4}, [Rn] | | |
| int16x8x4_t value) | value.val[1] -> Qd2 | VST42.16 {Qd - Qd4}, [Rn] | | |
| | value.val[2] -> Qd3 | VST43.16 {Qd - Qd4}, [Rn] | | |
| | value.val[3] -> Qd4 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst4q[_s32]( | addr -> Rn | VST40.32 {Qd - Qd4}, [Rn] | | |
| int32_t *addr, | value.val[0] -> Qd | VST41.32 {Qd - Qd4}, [Rn] | | |
| int32x4x4_t value) | value.val[1] -> Qd2 | VST42.32 {Qd - Qd4}, [Rn] | | |
| | value.val[2] -> Qd3 | VST43.32 {Qd - Qd4}, [Rn] | | |
| | value.val[3] -> Qd4 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst4q[_u8]( | addr -> Rn | VST40.8 {Qd - Qd4}, [Rn] | | |
| uint8_t *addr, | value.val[0] -> Qd | VST41.8 {Qd - Qd4}, [Rn] | | |
| uint8x16x4_t value) | value.val[1] -> Qd2 | VST42.8 {Qd - Qd4}, [Rn] | | |
| | value.val[2] -> Qd3 | VST43.8 {Qd - Qd4}, [Rn] | | |
| | value.val[3] -> Qd4 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst4q[_u16]( | addr -> Rn | VST40.16 {Qd - Qd4}, [Rn] | | |
| uint16_t *addr, | value.val[0] -> Qd | VST41.16 {Qd - Qd4}, [Rn] | | |
| uint16x8x4_t value) | value.val[1] -> Qd2 | VST42.16 {Qd - Qd4}, [Rn] | | |
| | value.val[2] -> Qd3 | VST43.16 {Qd - Qd4}, [Rn] | | |
| | value.val[3] -> Qd4 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst4q[_u32]( | addr -> Rn | VST40.32 {Qd - Qd4}, [Rn] | | |
| uint32_t *addr, | value.val[0] -> Qd | VST41.32 {Qd - Qd4}, [Rn] | | |
| uint32x4x4_t value) | value.val[1] -> Qd2 | VST42.32 {Qd - Qd4}, [Rn] | | |
| | value.val[2] -> Qd3 | VST43.32 {Qd - Qd4}, [Rn] | | |
| | value.val[3] -> Qd4 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst4q[_f16]( | addr -> Rn | VST40.16 {Qd - Qd4}, [Rn] | | |
| float16_t *addr, | value.val[0] -> Qd | VST41.16 {Qd - Qd4}, [Rn] | | |
| float16x8x4_t value) | value.val[1] -> Qd2 | VST42.16 {Qd - Qd4}, [Rn] | | |
| | value.val[2] -> Qd3 | VST43.16 {Qd - Qd4}, [Rn] | | |
| | value.val[3] -> Qd4 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst4q[_f32]( | addr -> Rn | VST40.32 {Qd - Qd4}, [Rn] | | |
| float32_t *addr, | value.val[0] -> Qd | VST41.32 {Qd - Qd4}, [Rn] | | |
| float32x4x4_t value) | value.val[1] -> Qd2 | VST42.32 {Qd - Qd4}, [Rn] | | |
| | value.val[2] -> Qd3 | VST43.32 {Qd - Qd4}, [Rn] | | |
| | value.val[3] -> Qd4 | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE/NEON`` |
| | | | | |
| void [__arm_]vst1q[_s8]( | base -> Rn | VSTRB.8 Qd, [Rn] | | |
| int8_t *base, | value -> Qd | | | |
| int8x16_t value) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE/NEON`` |
| | | | | |
| void [__arm_]vst1q[_s16]( | base -> Rn | VSTRH.16 Qd, [Rn] | | |
| int16_t *base, | value -> Qd | | | |
| int16x8_t value) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE/NEON`` |
| | | | | |
| void [__arm_]vst1q[_s32]( | base -> Rn | VSTRW.32 Qd, [Rn] | | |
| int32_t *base, | value -> Qd | | | |
| int32x4_t value) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE/NEON`` |
| | | | | |
| void [__arm_]vst1q[_u8]( | base -> Rn | VSTRB.8 Qd, [Rn] | | |
| uint8_t *base, | value -> Qd | | | |
| uint8x16_t value) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE/NEON`` |
| | | | | |
| void [__arm_]vst1q[_u16]( | base -> Rn | VSTRH.16 Qd, [Rn] | | |
| uint16_t *base, | value -> Qd | | | |
| uint16x8_t value) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE/NEON`` |
| | | | | |
| void [__arm_]vst1q[_u32]( | base -> Rn | VSTRW.32 Qd, [Rn] | | |
| uint32_t *base, | value -> Qd | | | |
| uint32x4_t value) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE/NEON`` |
| | | | | |
| void [__arm_]vst1q[_f16]( | base -> Rn | VSTRH.16 Qd, [Rn] | | |
| float16_t *base, | value -> Qd | | | |
| float16x8_t value) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE/NEON`` |
| | | | | |
| void [__arm_]vst1q[_f32]( | base -> Rn | VSTRW.32 Qd, [Rn] | | |
| float32_t *base, | value -> Qd | | | |
| float32x4_t value) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst1q_p[_s8]( | base -> Rn | VMSR P0, Rp | | |
| int8_t *base, | value -> Qd | VPST | | |
| int8x16_t value, | p -> Rp | VSTRBT.8 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst1q_p[_s16]( | base -> Rn | VMSR P0, Rp | | |
| int16_t *base, | value -> Qd | VPST | | |
| int16x8_t value, | p -> Rp | VSTRHT.16 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst1q_p[_s32]( | base -> Rn | VMSR P0, Rp | | |
| int32_t *base, | value -> Qd | VPST | | |
| int32x4_t value, | p -> Rp | VSTRWT.32 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst1q_p[_u8]( | base -> Rn | VMSR P0, Rp | | |
| uint8_t *base, | value -> Qd | VPST | | |
| uint8x16_t value, | p -> Rp | VSTRBT.8 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst1q_p[_u16]( | base -> Rn | VMSR P0, Rp | | |
| uint16_t *base, | value -> Qd | VPST | | |
| uint16x8_t value, | p -> Rp | VSTRHT.16 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst1q_p[_u32]( | base -> Rn | VMSR P0, Rp | | |
| uint32_t *base, | value -> Qd | VPST | | |
| uint32x4_t value, | p -> Rp | VSTRWT.32 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst1q_p[_f16]( | base -> Rn | VMSR P0, Rp | | |
| float16_t *base, | value -> Qd | VPST | | |
| float16x8_t value, | p -> Rp | VSTRHT.16 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vst1q_p[_f32]( | base -> Rn | VMSR P0, Rp | | |
| float32_t *base, | value -> Qd | VPST | | |
| float32x4_t value, | p -> Rp | VSTRWT.32 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+---------------------------------+--------------------------+--------------------------------+----------+---------------------------+
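The ``vst2q`` family above writes its two source vectors to memory interleaved, which the ``VST20``/``VST21`` instruction pair accomplishes together. A scalar sketch of the memory layout, using the hypothetical name ``vst2q_s32_model`` (not an ACLE function):

.. code:: c

   #include <stdint.h>

   /* Behavioural sketch (hypothetical helper, not an ACLE function) of
    * vst2q[_s32]: the two source vectors are interleaved, so lane i of
    * value.val[0] lands at addr[2*i] and lane i of value.val[1] at
    * addr[2*i + 1]. */
   static void vst2q_s32_model(int32_t *addr, const int32_t val[2][4])
   {
       for (int i = 0; i < 4; i++) {
           addr[2 * i]     = val[0][i];
           addr[2 * i + 1] = val[1][i];
       }
   }

``vst4q`` extends the same pattern to four vectors with a stride of four, so lane ``i`` of ``value.val[j]`` lands at ``addr[4*i + j]``.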
Consecutive
~~~~~~~~~~~
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==================================+========================+========================+==========+===========================+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq[_s8]( | base -> Rn | VSTRB.8 Qd, [Rn] | | |
| int8_t *base, | value -> Qd | | | |
| int8x16_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq[_s16]( | base -> Rn | VSTRB.16 Qd, [Rn] | | |
| int8_t *base, | value -> Qd | | | |
| int16x8_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq[_s32]( | base -> Rn | VSTRB.32 Qd, [Rn] | | |
| int8_t *base, | value -> Qd | | | |
| int32x4_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq[_u8]( | base -> Rn | VSTRB.8 Qd, [Rn] | | |
| uint8_t *base, | value -> Qd | | | |
| uint8x16_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq[_u16]( | base -> Rn | VSTRB.16 Qd, [Rn] | | |
| uint8_t *base, | value -> Qd | | | |
| uint16x8_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq[_u32]( | base -> Rn | VSTRB.32 Qd, [Rn] | | |
| uint8_t *base, | value -> Qd | | | |
| uint32x4_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_p[_s8]( | base -> Rn | VMSR P0, Rp | | |
| int8_t *base, | value -> Qd | VPST | | |
| int8x16_t value, | p -> Rp | VSTRBT.8 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_p[_s16]( | base -> Rn | VMSR P0, Rp | | |
| int8_t *base, | value -> Qd | VPST | | |
| int16x8_t value, | p -> Rp | VSTRBT.16 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_p[_s32]( | base -> Rn | VMSR P0, Rp | | |
| int8_t *base, | value -> Qd | VPST | | |
| int32x4_t value, | p -> Rp | VSTRBT.32 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_p[_u8]( | base -> Rn | VMSR P0, Rp | | |
| uint8_t *base, | value -> Qd | VPST | | |
| uint8x16_t value, | p -> Rp | VSTRBT.8 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_p[_u16]( | base -> Rn | VMSR P0, Rp | | |
| uint8_t *base, | value -> Qd | VPST | | |
| uint16x8_t value, | p -> Rp | VSTRBT.16 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_p[_u32]( | base -> Rn | VMSR P0, Rp | | |
| uint8_t *base, | value -> Qd | VPST | | |
| uint32x4_t value, | p -> Rp | VSTRBT.32 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq[_s16]( | base -> Rn | VSTRH.16 Qd, [Rn] | | |
| int16_t *base, | value -> Qd | | | |
| int16x8_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq[_s32]( | base -> Rn | VSTRH.32 Qd, [Rn] | | |
| int16_t *base, | value -> Qd | | | |
| int32x4_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq[_u16]( | base -> Rn | VSTRH.16 Qd, [Rn] | | |
| uint16_t *base, | value -> Qd | | | |
| uint16x8_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq[_u32]( | base -> Rn | VSTRH.32 Qd, [Rn] | | |
| uint16_t *base, | value -> Qd | | | |
| uint32x4_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq[_f16]( | base -> Rn | VSTRH.16 Qd, [Rn] | | |
| float16_t *base, | value -> Qd | | | |
| float16x8_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_p[_s16]( | base -> Rn | VMSR P0, Rp | | |
| int16_t *base, | value -> Qd | VPST | | |
| int16x8_t value, | p -> Rp | VSTRHT.16 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_p[_s32]( | base -> Rn | VMSR P0, Rp | | |
| int16_t *base, | value -> Qd | VPST | | |
| int32x4_t value, | p -> Rp | VSTRHT.32 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_p[_u16]( | base -> Rn | VMSR P0, Rp | | |
| uint16_t *base, | value -> Qd | VPST | | |
| uint16x8_t value, | p -> Rp | VSTRHT.16 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_p[_u32]( | base -> Rn | VMSR P0, Rp | | |
| uint16_t *base, | value -> Qd | VPST | | |
| uint32x4_t value, | p -> Rp | VSTRHT.32 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_p[_f16]( | base -> Rn | VMSR P0, Rp | | |
| float16_t *base, | value -> Qd | VPST | | |
| float16x8_t value, | p -> Rp | VSTRHT.16 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq[_s32]( | base -> Rn | VSTRW.32 Qd, [Rn] | | |
| int32_t *base, | value -> Qd | | | |
| int32x4_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq[_u32]( | base -> Rn | VSTRW.32 Qd, [Rn] | | |
| uint32_t *base, | value -> Qd | | | |
| uint32x4_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq[_f32]( | base -> Rn | VSTRW.32 Qd, [Rn] | | |
| float32_t *base, | value -> Qd | | | |
| float32x4_t value) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_p[_s32]( | base -> Rn | VMSR P0, Rp | | |
| int32_t *base, | value -> Qd | VPST | | |
| int32x4_t value, | p -> Rp | VSTRWT.32 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_p[_u32]( | base -> Rn | VMSR P0, Rp | | |
| uint32_t *base, | value -> Qd | VPST | | |
| uint32x4_t value, | p -> Rp | VSTRWT.32 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_p[_f32]( | base -> Rn | VMSR P0, Rp | | |
| float32_t *base, | value -> Qd | VPST | | |
| float32x4_t value, | p -> Rp | VSTRWT.32 Qd, [Rn] | | |
| mve_pred16_t p) | | | | |
+----------------------------------+------------------------+------------------------+----------+---------------------------+
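As an illustration of the per-lane predication used by the ``_p`` store variants above, the following portable C sketch models the semantics of ``vstrwq_p[_s32]``. This is a scalar reference model for exposition only; the name ``scalar_vstrwq_p`` and the lane loop are illustrative and not part of the ACLE API.

```c
#include <stdint.h>

/* Scalar model of vstrwq_p[_s32]. MVE predication is byte-granular:
 * bits 4*i .. 4*i+3 of the mve_pred16_t control the four bytes of
 * 32-bit lane i. This sketch assumes the usual whole-lane case
 * (e.g. a predicate produced by vctp32q), so it tests only bit 4*i. */
static void scalar_vstrwq_p(int32_t *base, const int32_t value[4],
                            uint16_t p)
{
    for (int i = 0; i < 4; i++) {
        if (p & (1u << (4 * i)))    /* lane i predicated on */
            base[i] = value[i];
        /* predicated-off lanes leave memory unchanged */
    }
}
```

Compiled for an MVE target, the real intrinsic lowers to the ``VMSR P0, Rp`` / ``VPST`` / ``VSTRWT.32`` sequence shown in the table.
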

Scatter
~~~~~~~

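The scatter variants store each lane to an independently computed address rather than to consecutive memory. The following portable C sketch models ``vstrbq_scatter_offset[_s8]``; the name ``scalar_vstrbq_scatter_offset`` is illustrative only and not part of the ACLE API.

```c
#include <stdint.h>

/* Scalar model of vstrbq_scatter_offset[_s8]: each of the 16 byte
 * lanes is stored to base[offset[i]]. Offsets are independent, so
 * lanes may write anywhere in the range addressable from base. */
static void scalar_vstrbq_scatter_offset(int8_t *base,
                                         const uint8_t offset[16],
                                         const int8_t value[16])
{
    for (int i = 0; i < 16; i++)
        base[offset[i]] = value[i];
}
```

The shifted-offset variants differ only in scaling each offset by the element size first (``offset[i] << 1`` for halfwords, ``<< 2`` for words, ``<< 3`` for doublewords), matching the ``UXTW #n`` forms in the tables below.
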
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=========================================================+==============================+=====================================+=================+===========================+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_scatter_offset[_s8]( | base -> Rn | VSTRB.8 Qd, [Rn, Qm] | | |
| int8_t *base, | offset -> Qm | | | |
| uint8x16_t offset, | value -> Qd | | | |
| int8x16_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_scatter_offset[_s16]( | base -> Rn | VSTRB.16 Qd, [Rn, Qm] | | |
| int8_t *base, | offset -> Qm | | | |
| uint16x8_t offset, | value -> Qd | | | |
| int16x8_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_scatter_offset[_s32]( | base -> Rn | VSTRB.32 Qd, [Rn, Qm] | | |
| int8_t *base, | offset -> Qm | | | |
| uint32x4_t offset, | value -> Qd | | | |
| int32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_scatter_offset[_u8]( | base -> Rn | VSTRB.8 Qd, [Rn, Qm] | | |
| uint8_t *base, | offset -> Qm | | | |
| uint8x16_t offset, | value -> Qd | | | |
| uint8x16_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_scatter_offset[_u16]( | base -> Rn | VSTRB.16 Qd, [Rn, Qm] | | |
| uint8_t *base, | offset -> Qm | | | |
| uint16x8_t offset, | value -> Qd | | | |
| uint16x8_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_scatter_offset[_u32]( | base -> Rn | VSTRB.32 Qd, [Rn, Qm] | | |
| uint8_t *base, | offset -> Qm | | | |
| uint32x4_t offset, | value -> Qd | | | |
| uint32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_scatter_offset_p[_s8]( | base -> Rn | VMSR P0, Rp | | |
| int8_t *base, | offset -> Qm | VPST | | |
| uint8x16_t offset, | value -> Qd | VSTRBT.8 Qd, [Rn, Qm] | | |
| int8x16_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_scatter_offset_p[_s16]( | base -> Rn | VMSR P0, Rp | | |
| int8_t *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | value -> Qd | VSTRBT.16 Qd, [Rn, Qm] | | |
| int16x8_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_scatter_offset_p[_s32]( | base -> Rn | VMSR P0, Rp | | |
| int8_t *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | value -> Qd | VSTRBT.32 Qd, [Rn, Qm] | | |
| int32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_scatter_offset_p[_u8]( | base -> Rn | VMSR P0, Rp | | |
| uint8_t *base, | offset -> Qm | VPST | | |
| uint8x16_t offset, | value -> Qd | VSTRBT.8 Qd, [Rn, Qm] | | |
| uint8x16_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_scatter_offset_p[_u16]( | base -> Rn | VMSR P0, Rp | | |
| uint8_t *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | value -> Qd | VSTRBT.16 Qd, [Rn, Qm] | | |
| uint16x8_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrbq_scatter_offset_p[_u32]( | base -> Rn | VMSR P0, Rp | | |
| uint8_t *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | value -> Qd | VSTRBT.32 Qd, [Rn, Qm] | | |
| uint32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_offset[_s16]( | base -> Rn | VSTRH.16 Qd, [Rn, Qm] | | |
| int16_t *base, | offset -> Qm | | | |
| uint16x8_t offset, | value -> Qd | | | |
| int16x8_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_offset[_s32]( | base -> Rn | VSTRH.32 Qd, [Rn, Qm] | | |
| int16_t *base, | offset -> Qm | | | |
| uint32x4_t offset, | value -> Qd | | | |
| int32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_offset[_u16]( | base -> Rn | VSTRH.16 Qd, [Rn, Qm] | | |
| uint16_t *base, | offset -> Qm | | | |
| uint16x8_t offset, | value -> Qd | | | |
| uint16x8_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_offset[_u32]( | base -> Rn | VSTRH.32 Qd, [Rn, Qm] | | |
| uint16_t *base, | offset -> Qm | | | |
| uint32x4_t offset, | value -> Qd | | | |
| uint32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_offset[_f16]( | base -> Rn | VSTRH.16 Qd, [Rn, Qm] | | |
| float16_t *base, | offset -> Qm | | | |
| uint16x8_t offset, | value -> Qd | | | |
| float16x8_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_offset_p[_s16]( | base -> Rn | VMSR P0, Rp | | |
| int16_t *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | value -> Qd | VSTRHT.16 Qd, [Rn, Qm] | | |
| int16x8_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_offset_p[_s32]( | base -> Rn | VMSR P0, Rp | | |
| int16_t *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | value -> Qd | VSTRHT.32 Qd, [Rn, Qm] | | |
| int32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_offset_p[_u16]( | base -> Rn | VMSR P0, Rp | | |
| uint16_t *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | value -> Qd | VSTRHT.16 Qd, [Rn, Qm] | | |
| uint16x8_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_offset_p[_u32]( | base -> Rn | VMSR P0, Rp | | |
| uint16_t *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | value -> Qd | VSTRHT.32 Qd, [Rn, Qm] | | |
| uint32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_offset_p[_f16]( | base -> Rn | VMSR P0, Rp | | |
| float16_t *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | value -> Qd | VSTRHT.16 Qd, [Rn, Qm] | | |
| float16x8_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_shifted_offset[_s16]( | base -> Rn | VSTRH.16 Qd, [Rn, Qm, UXTW #1] | | |
| int16_t *base, | offset -> Qm | | | |
| uint16x8_t offset, | value -> Qd | | | |
| int16x8_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_shifted_offset[_s32]( | base -> Rn | VSTRH.32 Qd, [Rn, Qm, UXTW #1] | | |
| int16_t *base, | offset -> Qm | | | |
| uint32x4_t offset, | value -> Qd | | | |
| int32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_shifted_offset[_u16]( | base -> Rn | VSTRH.16 Qd, [Rn, Qm, UXTW #1] | | |
| uint16_t *base, | offset -> Qm | | | |
| uint16x8_t offset, | value -> Qd | | | |
| uint16x8_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_shifted_offset[_u32]( | base -> Rn | VSTRH.32 Qd, [Rn, Qm, UXTW #1] | | |
| uint16_t *base, | offset -> Qm | | | |
| uint32x4_t offset, | value -> Qd | | | |
| uint32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_shifted_offset[_f16]( | base -> Rn | VSTRH.16 Qd, [Rn, Qm, UXTW #1] | | |
| float16_t *base, | offset -> Qm | | | |
| uint16x8_t offset, | value -> Qd | | | |
| float16x8_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_shifted_offset_p[_s16]( | base -> Rn | VMSR P0, Rp | | |
| int16_t *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | value -> Qd | VSTRHT.16 Qd, [Rn, Qm, UXTW #1] | | |
| int16x8_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_shifted_offset_p[_s32]( | base -> Rn | VMSR P0, Rp | | |
| int16_t *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | value -> Qd | VSTRHT.32 Qd, [Rn, Qm, UXTW #1] | | |
| int32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_shifted_offset_p[_u16]( | base -> Rn | VMSR P0, Rp | | |
| uint16_t *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | value -> Qd | VSTRHT.16 Qd, [Rn, Qm, UXTW #1] | | |
| uint16x8_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_shifted_offset_p[_u32]( | base -> Rn | VMSR P0, Rp | | |
| uint16_t *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | value -> Qd | VSTRHT.32 Qd, [Rn, Qm, UXTW #1] | | |
| uint32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrhq_scatter_shifted_offset_p[_f16]( | base -> Rn | VMSR P0, Rp | | |
| float16_t *base, | offset -> Qm | VPST | | |
| uint16x8_t offset, | value -> Qd | VSTRHT.16 Qd, [Rn, Qm, UXTW #1] | | |
| float16x8_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_base[_s32]( | addr -> Qn | VSTRW.U32 Qd, [Qn, #offset] | | |
| uint32x4_t addr, | offset in +/-4*[0..127] | | | |
| const int offset, | value -> Qd | | | |
| int32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_base[_u32]( | addr -> Qn | VSTRW.U32 Qd, [Qn, #offset] | | |
| uint32x4_t addr, | offset in +/-4*[0..127] | | | |
| const int offset, | value -> Qd | | | |
| uint32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_base[_f32]( | addr -> Qn | VSTRW.U32 Qd, [Qn, #offset] | | |
| uint32x4_t addr, | offset in +/-4*[0..127] | | | |
| const int offset, | value -> Qd | | | |
| float32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_base_p[_s32]( | addr -> Qn | VMSR P0, Rp | | |
| uint32x4_t addr, | offset in +/-4*[0..127] | VPST | | |
| const int offset, | value -> Qd | VSTRWT.U32 Qd, [Qn, #offset] | | |
| int32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_base_p[_u32]( | addr -> Qn | VMSR P0, Rp | | |
| uint32x4_t addr, | offset in +/-4*[0..127] | VPST | | |
| const int offset, | value -> Qd | VSTRWT.U32 Qd, [Qn, #offset] | | |
| uint32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_base_p[_f32]( | addr -> Qn | VMSR P0, Rp | | |
| uint32x4_t addr, | offset in +/-4*[0..127] | VPST | | |
| const int offset, | value -> Qd | VSTRWT.U32 Qd, [Qn, #offset] | | |
| float32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_base_wb[_s32]( | *addr -> Qn | VSTRW.U32 Qd, [Qn, #offset]! | Qn -> *addr | |
| uint32x4_t *addr, | offset in +/-4*[0..127] | | | |
| const int offset, | value -> Qd | | | |
| int32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_base_wb[_u32]( | *addr -> Qn | VSTRW.U32 Qd, [Qn, #offset]! | Qn -> *addr | |
| uint32x4_t *addr, | offset in +/-4*[0..127] | | | |
| const int offset, | value -> Qd | | | |
| uint32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_base_wb[_f32]( | *addr -> Qn | VSTRW.U32 Qd, [Qn, #offset]! | Qn -> *addr | |
| uint32x4_t *addr, | offset in +/-4*[0..127] | | | |
| const int offset, | value -> Qd | | | |
| float32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_base_wb_p[_s32]( | *addr -> Qn | VMSR P0, Rp | Qn -> *addr | |
| uint32x4_t *addr, | offset in +/-4*[0..127] | VPST | | |
| const int offset, | value -> Qd | VSTRWT.U32 Qd, [Qn, #offset]! | | |
| int32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_base_wb_p[_u32]( | *addr -> Qn | VMSR P0, Rp | Qn -> *addr | |
| uint32x4_t *addr, | offset in +/-4*[0..127] | VPST | | |
| const int offset, | value -> Qd | VSTRWT.U32 Qd, [Qn, #offset]! | | |
| uint32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_base_wb_p[_f32]( | *addr -> Qn | VMSR P0, Rp | Qn -> *addr | |
| uint32x4_t *addr, | offset in +/-4*[0..127] | VPST | | |
| const int offset, | value -> Qd | VSTRWT.U32 Qd, [Qn, #offset]! | | |
| float32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_offset[_s32]( | base -> Rn | VSTRW.32 Qd, [Rn, Qm] | | |
| int32_t *base, | offset -> Qm | | | |
| uint32x4_t offset, | value -> Qd | | | |
| int32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_offset[_u32]( | base -> Rn | VSTRW.32 Qd, [Rn, Qm] | | |
| uint32_t *base, | offset -> Qm | | | |
| uint32x4_t offset, | value -> Qd | | | |
| uint32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_offset[_f32]( | base -> Rn | VSTRW.32 Qd, [Rn, Qm] | | |
| float32_t *base, | offset -> Qm | | | |
| uint32x4_t offset, | value -> Qd | | | |
| float32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_offset_p[_s32]( | base -> Rn | VMSR P0, Rp | | |
| int32_t *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | value -> Qd | VSTRWT.32 Qd, [Rn, Qm] | | |
| int32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_offset_p[_u32]( | base -> Rn | VMSR P0, Rp | | |
| uint32_t *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | value -> Qd | VSTRWT.32 Qd, [Rn, Qm] | | |
| uint32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_offset_p[_f32]( | base -> Rn | VMSR P0, Rp | | |
| float32_t *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | value -> Qd | VSTRWT.32 Qd, [Rn, Qm] | | |
| float32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_shifted_offset[_s32]( | base -> Rn | VSTRW.32 Qd, [Rn, Qm, UXTW #2] | | |
| int32_t *base, | offset -> Qm | | | |
| uint32x4_t offset, | value -> Qd | | | |
| int32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_shifted_offset[_u32]( | base -> Rn | VSTRW.32 Qd, [Rn, Qm, UXTW #2] | | |
| uint32_t *base, | offset -> Qm | | | |
| uint32x4_t offset, | value -> Qd | | | |
| uint32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_shifted_offset[_f32]( | base -> Rn | VSTRW.32 Qd, [Rn, Qm, UXTW #2] | | |
| float32_t *base, | offset -> Qm | | | |
| uint32x4_t offset, | value -> Qd | | | |
| float32x4_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_shifted_offset_p[_s32]( | base -> Rn | VMSR P0, Rp | | |
| int32_t *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | value -> Qd | VSTRWT.32 Qd, [Rn, Qm, UXTW #2] | | |
| int32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_shifted_offset_p[_u32]( | base -> Rn | VMSR P0, Rp | | |
| uint32_t *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | value -> Qd | VSTRWT.32 Qd, [Rn, Qm, UXTW #2] | | |
| uint32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrwq_scatter_shifted_offset_p[_f32]( | base -> Rn | VMSR P0, Rp | | |
| float32_t *base, | offset -> Qm | VPST | | |
| uint32x4_t offset, | value -> Qd | VSTRWT.32 Qd, [Rn, Qm, UXTW #2] | | |
| float32x4_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_base[_s64]( | addr -> Qn | VSTRD.U64 Qd, [Qn, #offset] | | |
| uint64x2_t addr, | offset in +/-8*[0..127] | | | |
| const int offset, | value -> Qd | | | |
| int64x2_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_base[_u64]( | addr -> Qn | VSTRD.U64 Qd, [Qn, #offset] | | |
| uint64x2_t addr, | offset in +/-8*[0..127] | | | |
| const int offset, | value -> Qd | | | |
| uint64x2_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_base_p[_s64]( | addr -> Qn | VMSR P0, Rp | | |
| uint64x2_t addr, | offset in +/-8*[0..127] | VPST | | |
| const int offset, | value -> Qd | VSTRDT.U64 Qd, [Qn, #offset] | | |
| int64x2_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_base_p[_u64]( | addr -> Qn | VMSR P0, Rp | | |
| uint64x2_t addr, | offset in +/-8*[0..127] | VPST | | |
| const int offset, | value -> Qd | VSTRDT.U64 Qd, [Qn, #offset] | | |
| uint64x2_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_base_wb[_s64]( | *addr -> Qn | VSTRD.U64 Qd, [Qn, #offset]! | Qn -> *addr | |
| uint64x2_t *addr, | offset in +/-8*[0..127] | | | |
| const int offset, | value -> Qd | | | |
| int64x2_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_base_wb[_u64]( | *addr -> Qn | VSTRD.U64 Qd, [Qn, #offset]! | Qn -> *addr | |
| uint64x2_t *addr, | offset in +/-8*[0..127] | | | |
| const int offset, | value -> Qd | | | |
| uint64x2_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_base_wb_p[_s64]( | *addr -> Qn | VMSR P0, Rp | Qn -> *addr | |
| uint64x2_t *addr, | offset in +/-8*[0..127] | VPST | | |
| const int offset, | value -> Qd | VSTRDT.U64 Qd, [Qn, #offset]! | | |
| int64x2_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_base_wb_p[_u64]( | *addr -> Qn | VMSR P0, Rp | Qn -> *addr | |
| uint64x2_t *addr, | offset in +/-8*[0..127] | VPST | | |
| const int offset, | value -> Qd | VSTRDT.U64 Qd, [Qn, #offset]! | | |
| uint64x2_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_offset[_s64]( | base -> Rn | VSTRD.64 Qd, [Rn, Qm] | | |
| int64_t *base, | offset -> Qm | | | |
| uint64x2_t offset, | value -> Qd | | | |
| int64x2_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_offset[_u64]( | base -> Rn | VSTRD.64 Qd, [Rn, Qm] | | |
| uint64_t *base, | offset -> Qm | | | |
| uint64x2_t offset, | value -> Qd | | | |
| uint64x2_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_offset_p[_s64]( | base -> Rn | VMSR P0, Rp | | |
| int64_t *base, | offset -> Qm | VPST | | |
| uint64x2_t offset, | value -> Qd | VSTRDT.64 Qd, [Rn, Qm] | | |
| int64x2_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_offset_p[_u64]( | base -> Rn | VMSR P0, Rp | | |
| uint64_t *base, | offset -> Qm | VPST | | |
| uint64x2_t offset, | value -> Qd | VSTRDT.64 Qd, [Rn, Qm] | | |
| uint64x2_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_shifted_offset[_s64]( | base -> Rn | VSTRD.64 Qd, [Rn, Qm, UXTW #3] | | |
| int64_t *base, | offset -> Qm | | | |
| uint64x2_t offset, | value -> Qd | | | |
| int64x2_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_shifted_offset[_u64]( | base -> Rn | VSTRD.64 Qd, [Rn, Qm, UXTW #3] | | |
| uint64_t *base, | offset -> Qm | | | |
| uint64x2_t offset, | value -> Qd | | | |
| uint64x2_t value) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_shifted_offset_p[_s64]( | base -> Rn | VMSR P0, Rp | | |
| int64_t *base, | offset -> Qm | VPST | | |
| uint64x2_t offset, | value -> Qd | VSTRDT.64 Qd, [Rn, Qm, UXTW #3] | | |
| int64x2_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
| .. code:: c | :: | :: | | ``MVE`` |
| | | | | |
| void [__arm_]vstrdq_scatter_shifted_offset_p[_u64]( | base -> Rn | VMSR P0, Rp | | |
| uint64_t *base, | offset -> Qm | VPST | | |
| uint64x2_t offset, | value -> Qd | VSTRDT.64 Qd, [Rn, Qm, UXTW #3] | | |
| uint64x2_t value, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------------------------+------------------------------+-------------------------------------+-----------------+---------------------------+
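
The ``vstrdq_scatter_base_wb`` intrinsics above store each 64-bit lane of ``value`` through the per-lane byte addresses held in ``*addr``, pre-incremented by ``offset`` (a multiple of 8 in the range ±8*[0..127]), and write the incremented addresses back. As an illustration only, here is a minimal scalar model of that behavior; the function name and its unpacked-array signature are hypothetical, and the real intrinsic requires an MVE target and ``arm_mve.h``:

```c
#include <stdint.h>

/* Hypothetical scalar model of [__arm_]vstrdq_scatter_base_wb[_u64]:
 * each 64-bit lane of addr holds a byte address; value lane i is
 * stored at addr[i] + offset (pre-indexed), and the incremented
 * addresses are written back.  offset must be a multiple of 8 in
 * the range +/-8*[0..127].  The predicated _p forms would simply
 * skip lanes whose predicate bits are clear. */
static void scatter_base_wb_u64(uint64_t addr[2], int offset,
                                const uint64_t value[2])
{
    for (int i = 0; i < 2; i++) {
        addr[i] += (uint64_t)(int64_t)offset;        /* writeback: advance lane address */
        *(uint64_t *)(uintptr_t)addr[i] = value[i];  /* store at the updated address */
    }
}
```

Note that, unlike the plain ``vstrdq_scatter_base`` forms, the base address vector is both an input and an output here, which is why the C prototype takes ``uint64x2_t *addr`` rather than a value.
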

Data type conversion
====================

Conversions
~~~~~~~~~~~

+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=======================================================+========================+================================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vcvtaq_s16_f16(float16x8_t a) | a -> Qm | VCVTA.S16.F16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vcvtaq_s32_f32(float32x4_t a) | a -> Qm | VCVTA.S32.F32 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vcvtaq_u16_f16(float16x8_t a) | a -> Qm | VCVTA.U16.F16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vcvtaq_u32_f32(float32x4_t a) | a -> Qm | VCVTA.U32.F32 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcvtaq_m[_s16_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VCVTAT.S16.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcvtaq_m[_s32_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VCVTAT.S32.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcvtaq_m[_u16_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VCVTAT.U16.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcvtaq_m[_u32_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VCVTAT.U32.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcvtaq_x_s16_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTAT.S16.F16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcvtaq_x_s32_f32( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTAT.S32.F32 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcvtaq_x_u16_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTAT.U16.F16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcvtaq_x_u32_f32( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTAT.U32.F32 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vcvtnq_s16_f16(float16x8_t a) | a -> Qm | VCVTN.S16.F16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vcvtnq_s32_f32(float32x4_t a) | a -> Qm | VCVTN.S32.F32 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vcvtnq_u16_f16(float16x8_t a) | a -> Qm | VCVTN.U16.F16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vcvtnq_u32_f32(float32x4_t a) | a -> Qm | VCVTN.U32.F32 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcvtnq_m[_s16_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VCVTNT.S16.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcvtnq_m[_s32_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VCVTNT.S32.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcvtnq_m[_u16_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VCVTNT.U16.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcvtnq_m[_u32_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VCVTNT.U32.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcvtnq_x_s16_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTNT.S16.F16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcvtnq_x_s32_f32( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTNT.S32.F32 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcvtnq_x_u16_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTNT.U16.F16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcvtnq_x_u32_f32( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTNT.U32.F32 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vcvtpq_s16_f16(float16x8_t a) | a -> Qm | VCVTP.S16.F16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vcvtpq_s32_f32(float32x4_t a) | a -> Qm | VCVTP.S32.F32 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vcvtpq_u16_f16(float16x8_t a) | a -> Qm | VCVTP.U16.F16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vcvtpq_u32_f32(float32x4_t a) | a -> Qm | VCVTP.U32.F32 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcvtpq_m[_s16_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VCVTPT.S16.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcvtpq_m[_s32_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VCVTPT.S32.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcvtpq_m[_u16_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VCVTPT.U16.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcvtpq_m[_u32_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VCVTPT.U32.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcvtpq_x_s16_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTPT.S16.F16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcvtpq_x_s32_f32( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTPT.S32.F32 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcvtpq_x_u16_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTPT.U16.F16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcvtpq_x_u32_f32( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTPT.U32.F32 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vcvtmq_s16_f16(float16x8_t a) | a -> Qm | VCVTM.S16.F16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vcvtmq_s32_f32(float32x4_t a) | a -> Qm | VCVTM.S32.F32 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vcvtmq_u16_f16(float16x8_t a) | a -> Qm | VCVTM.U16.F16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vcvtmq_u32_f32(float32x4_t a) | a -> Qm | VCVTM.U32.F32 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcvtmq_m[_s16_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VCVTMT.S16.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcvtmq_m[_s32_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VCVTMT.S32.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcvtmq_m[_u16_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VCVTMT.U16.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcvtmq_m[_u32_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VCVTMT.U32.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcvtmq_x_s16_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTMT.S16.F16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcvtmq_x_s32_f32( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTMT.S32.F32 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcvtmq_x_u16_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTMT.U16.F16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcvtmq_x_u32_f32( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTMT.U32.F32 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcvtbq_f16_f32( | a -> Qd | VCVTB.F16.F32 Qd, Qm | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvtbq_f32_f16(float16x8_t a) | a -> Qm | VCVTB.F32.F16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcvtbq_m_f16_f32( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCVTBT.F16.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvtbq_m_f32_f16( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VCVTBT.F32.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvtbq_x_f32_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTBT.F32.F16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcvttq_f16_f32( | a -> Qd | VCVTT.F16.F32 Qd, Qm | Qd -> result | |
| float16x8_t a, | b -> Qm | | | |
| float32x4_t b) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvttq_f32_f16(float16x8_t a) | a -> Qm | VCVTT.F32.F16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcvttq_m_f16_f32( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPST | | |
| float32x4_t b, | p -> Rp | VCVTTT.F16.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvttq_m_f32_f16( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VCVTTT.F32.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvttq_x_f32_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTTT.F32.F16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vcvtq[_f16_s16](int16x8_t a) | a -> Qm | VCVT.F16.S16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vcvtq[_f16_u16](uint16x8_t a) | a -> Qm | VCVT.F16.U16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vcvtq[_f32_s32](int32x4_t a) | a -> Qm | VCVT.F32.S32 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vcvtq[_f32_u32](uint32x4_t a) | a -> Qm | VCVT.F32.U32 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcvtq_m[_f16_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | p -> Rp | VCVTT.F16.S16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcvtq_m[_f16_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | p -> Rp | VCVTT.F16.U16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvtq_m[_f32_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | p -> Rp | VCVTT.F32.S32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvtq_m[_f32_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| uint32x4_t a, | p -> Rp | VCVTT.F32.U32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcvtq_x[_f16_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTT.F16.U16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcvtq_x[_f16_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTT.F16.S16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvtq_x[_f32_s32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTT.F32.S32 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvtq_x[_f32_u32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTT.F32.U32 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vcvtq_n[_f16_s16]( | a -> Qm | VCVT.F16.S16 Qd, Qm, imm6 | Qd -> result | |
| int16x8_t a, | 1 <= imm6 <= 16 | | | |
| const int imm6) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vcvtq_n[_f16_u16]( | a -> Qm | VCVT.F16.U16 Qd, Qm, imm6 | Qd -> result | |
| uint16x8_t a, | 1 <= imm6 <= 16 | | | |
| const int imm6) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vcvtq_n[_f32_s32]( | a -> Qm | VCVT.F32.S32 Qd, Qm, imm6 | Qd -> result | |
| int32x4_t a, | 1 <= imm6 <= 32 | | | |
| const int imm6) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vcvtq_n[_f32_u32]( | a -> Qm | VCVT.F32.U32 Qd, Qm, imm6 | Qd -> result | |
| uint32x4_t a, | 1 <= imm6 <= 32 | | | |
| const int imm6) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcvtq_m_n[_f16_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | 1 <= imm6 <= 16 | VCVTT.F16.S16 Qd, Qm, imm6 | | |
| const int imm6, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcvtq_m_n[_f16_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | 1 <= imm6 <= 16 | VCVTT.F16.U16 Qd, Qm, imm6 | | |
| const int imm6, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvtq_m_n[_f32_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | 1 <= imm6 <= 32 | VCVTT.F32.S32 Qd, Qm, imm6 | | |
| const int imm6, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvtq_m_n[_f32_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qm | VPST | | |
| uint32x4_t a, | 1 <= imm6 <= 32 | VCVTT.F32.U32 Qd, Qm, imm6 | | |
| const int imm6, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcvtq_x_n[_f16_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | 1 <= imm6 <= 16 | VPST | | |
| const int imm6, | p -> Rp | VCVTT.F16.S16 Qd, Qm, imm6 | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vcvtq_x_n[_f16_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | 1 <= imm6 <= 16 | VPST | | |
| const int imm6, | p -> Rp | VCVTT.F16.U16 Qd, Qm, imm6 | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvtq_x_n[_f32_s32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | 1 <= imm6 <= 32 | VPST | | |
| const int imm6, | p -> Rp | VCVTT.F32.S32 Qd, Qm, imm6 | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vcvtq_x_n[_f32_u32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | 1 <= imm6 <= 32 | VPST | | |
| const int imm6, | p -> Rp | VCVTT.F32.U32 Qd, Qm, imm6 | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vcvtq_s16_f16(float16x8_t a) | a -> Qm | VCVT.S16.F16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vcvtq_s32_f32(float32x4_t a) | a -> Qm | VCVT.S32.F32 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vcvtq_u16_f16(float16x8_t a) | a -> Qm | VCVT.U16.F16 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vcvtq_u32_f32(float32x4_t a) | a -> Qm | VCVT.U32.F32 Qd, Qm | Qd -> result | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcvtq_m[_s16_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VCVTT.S16.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcvtq_m[_s32_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VCVTT.S32.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcvtq_m[_u16_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | p -> Rp | VCVTT.U16.F16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcvtq_m[_u32_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | p -> Rp | VCVTT.U32.F32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcvtq_x_s16_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTT.S16.F16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcvtq_x_s32_f32( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTT.S32.F32 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcvtq_x_u16_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTT.U16.F16 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcvtq_x_u32_f32( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCVTT.U32.F32 Qd, Qm | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vcvtq_n_s16_f16( | a -> Qm | VCVT.S16.F16 Qd, Qm, imm6 | Qd -> result | |
| float16x8_t a, | 1 <= imm6 <= 16 | | | |
| const int imm6) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vcvtq_n_s32_f32( | a -> Qm | VCVT.S32.F32 Qd, Qm, imm6 | Qd -> result | |
| float32x4_t a, | 1 <= imm6 <= 32 | | | |
| const int imm6) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vcvtq_n_u16_f16( | a -> Qm | VCVT.U16.F16 Qd, Qm, imm6 | Qd -> result | |
| float16x8_t a, | 1 <= imm6 <= 16 | | | |
| const int imm6) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vcvtq_n_u32_f32( | a -> Qm | VCVT.U32.F32 Qd, Qm, imm6 | Qd -> result | |
| float32x4_t a, | 1 <= imm6 <= 32 | | | |
| const int imm6) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcvtq_m_n[_s16_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | 1 <= imm6 <= 16 | VCVTT.S16.F16 Qd, Qm, imm6 | | |
| const int imm6, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcvtq_m_n[_s32_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | 1 <= imm6 <= 32 | VCVTT.S32.F32 Qd, Qm, imm6 | | |
| const int imm6, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcvtq_m_n[_u16_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| float16x8_t a, | 1 <= imm6 <= 16 | VCVTT.U16.F16 Qd, Qm, imm6 | | |
| const int imm6, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcvtq_m_n[_u32_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| float32x4_t a, | 1 <= imm6 <= 32 | VCVTT.U32.F32 Qd, Qm, imm6 | | |
| const int imm6, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vcvtq_x_n_s16_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | 1 <= imm6 <= 16 | VPST | | |
| const int imm6, | p -> Rp | VCVTT.S16.F16 Qd, Qm, imm6 | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vcvtq_x_n_s32_f32( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | 1 <= imm6 <= 32 | VPST | | |
| const int imm6, | p -> Rp | VCVTT.S32.F32 Qd, Qm, imm6 | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vcvtq_x_n_u16_f16( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | 1 <= imm6 <= 16 | VPST | | |
| const int imm6, | p -> Rp | VCVTT.U16.F16 Qd, Qm, imm6 | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vcvtq_x_n_u32_f32( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | 1 <= imm6 <= 32 | VPST | | |
| const int imm6, | p -> Rp | VCVTT.U32.F32 Qd, Qm, imm6 | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------------------+------------------------+--------------------------------+------------------+---------------------------+
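The ``_n`` conversion variants above treat the integer operand as a fixed-point value with ``imm6`` fractional bits, so each lane computes ``a / 2^imm6``. As a rough scalar sketch of the per-lane arithmetic behind ``VCVT.F32.S32 Qd, Qm, imm6`` (the function name and standalone framing below are illustrative, not part of the intrinsics API):

.. code:: c

   #include <assert.h>
   #include <math.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Scalar model of one lane of vcvtq_n_f32_s32(a, imm6): the
    * signed integer is interpreted as fixed-point with imm6
    * fractional bits, i.e. result = a / 2^imm6. */
   static float fixed_to_float_lane(int32_t a, int imm6)
   {
       assert(imm6 >= 1 && imm6 <= 32); /* same range the instruction accepts */
       return (float)((double)a / ldexp(1.0, imm6));
   }

   int main(void)
   {
       /* Q15 value 0x4000 (16384) with 15 fractional bits is 0.5. */
       printf("%f\n", fixed_to_float_lane(16384, 15)); /* prints 0.500000 */
       return 0;
   }

The predicated ``_m``/``_x`` forms apply the same per-lane arithmetic only in lanes selected by the ``mve_pred16_t`` mask; inactive lanes take the ``inactive`` operand (``_m``) or are unspecified (``_x``).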
Reinterpret casts
~~~~~~~~~~~~~~~~~
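A reinterpret cast changes only the static vector type; the 128 bits in the ``Q`` register are left untouched, which is why the instruction column below is ``NOP``. A minimal scalar sketch of the same bit-level reinterpretation for one 32-bit lane, using ``memcpy`` as the portable C idiom (the function name is illustrative, not part of the intrinsics API):

.. code:: c

   #include <stdint.h>
   #include <stdio.h>
   #include <string.h>

   /* Model of one 32-bit lane of vreinterpretq_f32[_u32]: the bit
    * pattern is copied unchanged; only the type changes. */
   static float reinterpret_u32_as_f32(uint32_t bits)
   {
       float f;
       memcpy(&f, &bits, sizeof f); /* no conversion, just a type reinterpretation */
       return f;
   }

   int main(void)
   {
       /* 0x3F800000 is the IEEE 754 single-precision encoding of 1.0. */
       printf("%f\n", reinterpret_u32_as_f32(0x3F800000u)); /* prints 1.000000 */
       return 0;
   }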
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+================================================================+========================+===============+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vreinterpretq_s16[_s8](int8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vreinterpretq_s32[_s8](int8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vreinterpretq_f32[_s8](int8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vreinterpretq_u8[_s8](int8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vreinterpretq_u16[_s8](int8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vreinterpretq_u32[_s8](int8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint64x2_t [__arm_]vreinterpretq_u64[_s8](int8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int64x2_t [__arm_]vreinterpretq_s64[_s8](int8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vreinterpretq_f16[_s8](int8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vreinterpretq_s8[_s16](int16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vreinterpretq_s32[_s16](int16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vreinterpretq_f32[_s16](int16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vreinterpretq_u8[_s16](int16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vreinterpretq_u16[_s16](int16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vreinterpretq_u32[_s16](int16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint64x2_t [__arm_]vreinterpretq_u64[_s16](int16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int64x2_t [__arm_]vreinterpretq_s64[_s16](int16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vreinterpretq_f16[_s16](int16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vreinterpretq_s8[_s32](int32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vreinterpretq_s16[_s32](int32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vreinterpretq_f32[_s32](int32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vreinterpretq_u8[_s32](int32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vreinterpretq_u16[_s32](int32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vreinterpretq_u32[_s32](int32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint64x2_t [__arm_]vreinterpretq_u64[_s32](int32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int64x2_t [__arm_]vreinterpretq_s64[_s32](int32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vreinterpretq_f16[_s32](int32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vreinterpretq_s8[_f32](float32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vreinterpretq_s16[_f32](float32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vreinterpretq_s32[_f32](float32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vreinterpretq_u8[_f32](float32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vreinterpretq_u16[_f32](float32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vreinterpretq_u32[_f32](float32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint64x2_t [__arm_]vreinterpretq_u64[_f32](float32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int64x2_t [__arm_]vreinterpretq_s64[_f32](float32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vreinterpretq_f16[_f32](float32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vreinterpretq_s8[_u8](uint8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vreinterpretq_s16[_u8](uint8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vreinterpretq_s32[_u8](uint8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vreinterpretq_f32[_u8](uint8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vreinterpretq_u16[_u8](uint8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vreinterpretq_u32[_u8](uint8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint64x2_t [__arm_]vreinterpretq_u64[_u8](uint8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int64x2_t [__arm_]vreinterpretq_s64[_u8](uint8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vreinterpretq_f16[_u8](uint8x16_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vreinterpretq_s8[_u16](uint16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vreinterpretq_s16[_u16](uint16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vreinterpretq_s32[_u16](uint16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vreinterpretq_f32[_u16](uint16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vreinterpretq_u8[_u16](uint16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vreinterpretq_u32[_u16](uint16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint64x2_t [__arm_]vreinterpretq_u64[_u16](uint16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int64x2_t [__arm_]vreinterpretq_s64[_u16](uint16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vreinterpretq_f16[_u16](uint16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vreinterpretq_s8[_u32](uint32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vreinterpretq_s16[_u32](uint32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vreinterpretq_s32[_u32](uint32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vreinterpretq_f32[_u32](uint32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vreinterpretq_u8[_u32](uint32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vreinterpretq_u16[_u32](uint32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint64x2_t [__arm_]vreinterpretq_u64[_u32](uint32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int64x2_t [__arm_]vreinterpretq_s64[_u32](uint32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vreinterpretq_f16[_u32](uint32x4_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vreinterpretq_s8[_u64](uint64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vreinterpretq_s16[_u64](uint64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vreinterpretq_s32[_u64](uint64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vreinterpretq_f32[_u64](uint64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vreinterpretq_u8[_u64](uint64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vreinterpretq_u16[_u64](uint64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vreinterpretq_u32[_u64](uint64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int64x2_t [__arm_]vreinterpretq_s64[_u64](uint64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vreinterpretq_f16[_u64](uint64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vreinterpretq_s8[_s64](int64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vreinterpretq_s16[_s64](int64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vreinterpretq_s32[_s64](int64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vreinterpretq_f32[_s64](int64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vreinterpretq_u8[_s64](int64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vreinterpretq_u16[_s64](int64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vreinterpretq_u32[_s64](int64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint64x2_t [__arm_]vreinterpretq_u64[_s64](int64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float16x8_t [__arm_]vreinterpretq_f16[_s64](int64x2_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vreinterpretq_s8[_f16](float16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vreinterpretq_s16[_f16](float16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vreinterpretq_s32[_f16](float16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| float32x4_t [__arm_]vreinterpretq_f32[_f16](float16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vreinterpretq_u8[_f16](float16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vreinterpretq_u16[_f16](float16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vreinterpretq_u32[_f16](float16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint64x2_t [__arm_]vreinterpretq_u64[_f16](float16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int64x2_t [__arm_]vreinterpretq_s64[_f16](float16x8_t a) | a -> Qd | NOP | Qd -> result | |
+----------------------------------------------------------------+------------------------+---------------+------------------+---------------------------+
Shift
=====
Right
~~~~~
Vector bit reverse and shift right
----------------------------------
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+===========================================+========================+==========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vbrsrq[_n_s8]( | a -> Qn | VBRSR.8 Qd, Qn, Rm | Qd -> result | |
| int8x16_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vbrsrq[_n_s16]( | a -> Qn | VBRSR.16 Qd, Qn, Rm | Qd -> result | |
| int16x8_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vbrsrq[_n_s32]( | a -> Qn | VBRSR.32 Qd, Qn, Rm | Qd -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vbrsrq[_n_u8]( | a -> Qn | VBRSR.8 Qd, Qn, Rm | Qd -> result | |
| uint8x16_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vbrsrq[_n_u16]( | a -> Qn | VBRSR.16 Qd, Qn, Rm | Qd -> result | |
| uint16x8_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vbrsrq[_n_u32]( | a -> Qn | VBRSR.32 Qd, Qn, Rm | Qd -> result | |
| uint32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vbrsrq[_n_f16]( | a -> Qn | VBRSR.16 Qd, Qn, Rm | Qd -> result | |
| float16x8_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vbrsrq[_n_f32]( | a -> Qn | VBRSR.32 Qd, Qn, Rm | Qd -> result | |
| float32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vbrsrq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qn | VPST | | |
| int8x16_t a, | b -> Rm | VBRSRT.8 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vbrsrq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qn | VPST | | |
| int16x8_t a, | b -> Rm | VBRSRT.16 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vbrsrq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qn | VPST | | |
| int32x4_t a, | b -> Rm | VBRSRT.32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vbrsrq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qn | VPST | | |
| uint8x16_t a, | b -> Rm | VBRSRT.8 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vbrsrq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qn | VPST | | |
| uint16x8_t a, | b -> Rm | VBRSRT.16 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vbrsrq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qn | VPST | | |
| uint32x4_t a, | b -> Rm | VBRSRT.32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vbrsrq_m[_n_f16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float16x8_t inactive, | a -> Qn | VPST | | |
| float16x8_t a, | b -> Rm | VBRSRT.16 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vbrsrq_m[_n_f32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| float32x4_t inactive, | a -> Qn | VPST | | |
| float32x4_t a, | b -> Rm | VBRSRT.32 Qd, Qn, Rm | | |
| int32_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vbrsrq_x[_n_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VBRSRT.8 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vbrsrq_x[_n_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VBRSRT.16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vbrsrq_x[_n_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VBRSRT.32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vbrsrq_x[_n_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VBRSRT.8 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vbrsrq_x[_n_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VBRSRT.16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vbrsrq_x[_n_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VBRSRT.32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vbrsrq_x[_n_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VBRSRT.16 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vbrsrq_x[_n_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VBRSRT.32 Qd, Qn, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
Vector saturating rounding shift right and narrow
-------------------------------------------------
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==============================================+========================+=================================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrshrnbq[_n_s16]( | a -> Qd | VQRSHRNB.S16 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrshrnbq[_n_s32]( | a -> Qd | VQRSHRNB.S32 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqrshrnbq[_n_u16]( | a -> Qd | VQRSHRNB.U16 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqrshrnbq[_n_u32]( | a -> Qd | VQRSHRNB.U32 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrshrnbq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 1 <= imm <= 8 | VQRSHRNBT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrshrnbq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 1 <= imm <= 16 | VQRSHRNBT.S32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqrshrnbq_m[_n_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | 1 <= imm <= 8 | VQRSHRNBT.U16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqrshrnbq_m[_n_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | 1 <= imm <= 16 | VQRSHRNBT.U32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrshrntq[_n_s16]( | a -> Qd | VQRSHRNT.S16 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrshrntq[_n_s32]( | a -> Qd | VQRSHRNT.S32 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqrshrntq[_n_u16]( | a -> Qd | VQRSHRNT.U16 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqrshrntq[_n_u32]( | a -> Qd | VQRSHRNT.U32 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrshrntq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 1 <= imm <= 8 | VQRSHRNTT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrshrntq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 1 <= imm <= 16 | VQRSHRNTT.S32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqrshrntq_m[_n_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | 1 <= imm <= 8 | VQRSHRNTT.U16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqrshrntq_m[_n_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | 1 <= imm <= 16 | VQRSHRNTT.U32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqrshrunbq[_n_s16]( | a -> Qd | VQRSHRUNB.S16 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| int16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqrshrunbq[_n_s32]( | a -> Qd | VQRSHRUNB.S32 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| int32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqrshrunbq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 1 <= imm <= 8 | VQRSHRUNBT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqrshrunbq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 1 <= imm <= 16 | VQRSHRUNBT.S32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqrshruntq[_n_s16]( | a -> Qd | VQRSHRUNT.S16 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| int16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqrshruntq[_n_s32]( | a -> Qd | VQRSHRUNT.S32 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| int32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqrshruntq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 1 <= imm <= 8 | VQRSHRUNTT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqrshruntq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 1 <= imm <= 16 | VQRSHRUNTT.S32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqshrnbq[_n_s16]( | a -> Qd | VQSHRNB.S16 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqshrnbq[_n_s32]( | a -> Qd | VQSHRNB.S32 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshrnbq[_n_u16]( | a -> Qd | VQSHRNB.U16 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshrnbq[_n_u32]( | a -> Qd | VQSHRNB.U32 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqshrnbq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 1 <= imm <= 8 | VQSHRNBT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqshrnbq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 1 <= imm <= 16 | VQSHRNBT.S32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshrnbq_m[_n_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | 1 <= imm <= 8 | VQSHRNBT.U16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshrnbq_m[_n_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | 1 <= imm <= 16 | VQSHRNBT.U32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqshrntq[_n_s16]( | a -> Qd | VQSHRNT.S16 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqshrntq[_n_s32]( | a -> Qd | VQSHRNT.S32 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshrntq[_n_u16]( | a -> Qd | VQSHRNT.U16 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshrntq[_n_u32]( | a -> Qd | VQSHRNT.U32 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqshrntq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 1 <= imm <= 8 | VQSHRNTT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqshrntq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 1 <= imm <= 16 | VQSHRNTT.S32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshrntq_m[_n_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | 1 <= imm <= 8 | VQSHRNTT.U16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshrntq_m[_n_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | 1 <= imm <= 16 | VQSHRNTT.U32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshrunbq[_n_s16]( | a -> Qd | VQSHRUNB.S16 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| int16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshrunbq[_n_s32]( | a -> Qd | VQSHRUNB.S32 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| int32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshrunbq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 1 <= imm <= 8 | VQSHRUNBT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshrunbq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 1 <= imm <= 16 | VQSHRUNBT.S32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshruntq[_n_s16]( | a -> Qd | VQSHRUNT.S16 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| int16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshruntq[_n_s32]( | a -> Qd | VQSHRUNT.S32 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| int32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshruntq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 1 <= imm <= 8 | VQSHRUNTT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshruntq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 1 <= imm <= 16 | VQSHRUNTT.S32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------+------------------------+---------------------------------+------------------+---------------------------+
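All of the ``vqshrnbq``/``vqshrntq``/``vqshrunbq``/``vqshruntq`` families above share the same per-lane arithmetic: shift the wide element right, then saturate into the narrow result type (signed for ``VQSHRN``, unsigned for ``VQSHRUN``). The following scalar sketch models one 16-bit lane; the helper names are illustrative and not part of ``<arm_mve.h>``.

.. code:: c

   #include <stdint.h>

   /* Scalar model of one lane of VQSHRNB.S16: arithmetic shift right
    * by imm, then saturate into the signed 8-bit range. */
   static int8_t qshrn_s16_to_s8(int16_t x, int imm)
   {
       int32_t v = (int32_t)x >> imm;       /* arithmetic shift right */
       if (v > INT8_MAX) return INT8_MAX;   /* saturate high */
       if (v < INT8_MIN) return INT8_MIN;   /* saturate low */
       return (int8_t)v;
   }

   /* Scalar model of one lane of VQSHRUNB.S16: signed input,
    * unsigned saturated 8-bit result (negative values clamp to 0). */
   static uint8_t qshrun_s16_to_u8(int16_t x, int imm)
   {
       int32_t v = (int32_t)x >> imm;
       if (v > UINT8_MAX) return UINT8_MAX;
       if (v < 0) return 0;
       return (uint8_t)v;
   }

The ``b``/``t`` suffix only selects whether the narrowed lanes are written to the even (bottom) or odd (top) byte lanes of ``a``; the lane arithmetic is identical.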
Vector rounding shift right and narrow
--------------------------------------
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+============================================+========================+===============================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrshrnbq[_n_s16]( | a -> Qd | VRSHRNB.I16 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrshrnbq[_n_s32]( | a -> Qd | VRSHRNB.I32 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrshrnbq[_n_u16]( | a -> Qd | VRSHRNB.I16 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrshrnbq[_n_u32]( | a -> Qd | VRSHRNB.I32 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrshrnbq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 1 <= imm <= 8 | VRSHRNBT.I16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrshrnbq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 1 <= imm <= 16 | VRSHRNBT.I32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrshrnbq_m[_n_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | 1 <= imm <= 8 | VRSHRNBT.I16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrshrnbq_m[_n_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | 1 <= imm <= 16 | VRSHRNBT.I32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrshrntq[_n_s16]( | a -> Qd | VRSHRNT.I16 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrshrntq[_n_s32]( | a -> Qd | VRSHRNT.I32 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrshrntq[_n_u16]( | a -> Qd | VRSHRNT.I16 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrshrntq[_n_u32]( | a -> Qd | VRSHRNT.I32 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrshrntq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 1 <= imm <= 8 | VRSHRNTT.I16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrshrntq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 1 <= imm <= 16 | VRSHRNTT.I32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrshrntq_m[_n_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | 1 <= imm <= 8 | VRSHRNTT.I16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrshrntq_m[_n_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | 1 <= imm <= 16 | VRSHRNTT.I32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+--------------------------------------------+------------------------+-------------------------------+------------------+---------------------------+
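The rounding narrowing shifts (``vrshrnbq``/``vrshrntq``) differ from the plain narrowing shifts in one step: the rounding constant ``2^(imm-1)`` is added before the shift, and the result is truncated to the narrow type without saturation. A minimal scalar sketch of one 16-bit lane (the helper name is illustrative, not an ACLE intrinsic):

.. code:: c

   #include <stdint.h>

   /* Scalar model of one lane of VRSHRNB.I16: add the rounding
    * constant 2^(imm-1), shift right by imm, truncate to 8 bits.
    * Unlike the VQRSHRN forms, there is no saturation. */
   static int8_t rshrn_s16_to_s8(int16_t x, int imm)
   {
       int32_t v = ((int32_t)x + (1 << (imm - 1))) >> imm;
       return (int8_t)v;   /* plain truncation to the narrow type */
   }

For example, with ``imm = 4`` an input of ``15`` rounds up to ``1`` (``(15 + 8) >> 4``), whereas the non-rounding ``VSHRN`` form would produce ``0``.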
Vector rounding shift right
---------------------------
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==========================================+========================+=============================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vrshrq[_n_s8]( | a -> Qm | VRSHR.S8 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vrshrq[_n_s16]( | a -> Qm | VRSHR.S16 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vrshrq[_n_s32]( | a -> Qm | VRSHR.S32 Qd, Qm, #imm | Qd -> result | |
| int32x4_t a, | 1 <= imm <= 32 | | | |
| const int imm) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vrshrq[_n_u8]( | a -> Qm | VRSHR.U8 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vrshrq[_n_u16]( | a -> Qm | VRSHR.U16 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vrshrq[_n_u32]( | a -> Qm | VRSHR.U32 Qd, Qm, #imm | Qd -> result | |
| uint32x4_t a, | 1 <= imm <= 32 | | | |
| const int imm) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrshrq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | 1 <= imm <= 8 | VRSHRT.S8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrshrq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | 1 <= imm <= 16 | VRSHRT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vrshrq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | 1 <= imm <= 32 | VRSHRT.S32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrshrq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | 1 <= imm <= 8 | VRSHRT.U8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrshrq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | 1 <= imm <= 16 | VRSHRT.U16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vrshrq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint32x4_t a, | 1 <= imm <= 32 | VRSHRT.U32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrshrq_x[_n_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | 1 <= imm <= 8 | VPST | | |
| const int imm, | p -> Rp | VRSHRT.S8 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrshrq_x[_n_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | 1 <= imm <= 16 | VPST | | |
| const int imm, | p -> Rp | VRSHRT.S16 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vrshrq_x[_n_s32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | 1 <= imm <= 32 | VPST | | |
| const int imm, | p -> Rp | VRSHRT.S32 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrshrq_x[_n_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | 1 <= imm <= 8 | VPST | | |
| const int imm, | p -> Rp | VRSHRT.U8 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrshrq_x[_n_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | 1 <= imm <= 16 | VPST | | |
| const int imm, | p -> Rp | VRSHRT.U16 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vrshrq_x[_n_u32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | 1 <= imm <= 32 | VPST | | |
| const int imm, | p -> Rp | VRSHRT.U32 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+-----------------------------+------------------+---------------------------+
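The ``vrshrq`` family keeps the element width and applies the same rounding rule lane by lane: add ``2^(imm-1)``, then shift right, which yields the nearest integer with halves rounding towards positive infinity. A scalar sketch of one 32-bit lane (the helper name is illustrative):

.. code:: c

   #include <stdint.h>

   /* Scalar model of one lane of VRSHR.S32 Qd, Qm, #imm, as used by
    * vrshrq[_n_s32]: widen to avoid overflow, add the rounding
    * constant 2^(imm-1), then arithmetic shift right by imm. */
   static int32_t rshr_s32(int32_t x, int imm)
   {
       int64_t v = ((int64_t)x + ((int64_t)1 << (imm - 1))) >> imm;
       return (int32_t)v;
   }

For example, ``rshr_s32(5, 1)`` gives ``3`` (2.5 rounds up) and ``rshr_s32(-5, 1)`` gives ``-2`` (-2.5 rounds towards positive infinity). The ``_m`` forms additionally merge inactive lanes from ``inactive``, and the ``_x`` forms leave inactive lanes undefined.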
Vector shift right and narrow
-----------------------------
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+===========================================+========================+==============================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshrnbq[_n_s16]( | a -> Qd | VSHRNB.I16 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshrnbq[_n_s32]( | a -> Qd | VSHRNB.I32 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshrnbq[_n_u16]( | a -> Qd | VSHRNB.I16 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshrnbq[_n_u32]( | a -> Qd | VSHRNB.I32 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshrnbq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 1 <= imm <= 8 | VSHRNBT.I16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshrnbq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 1 <= imm <= 16 | VSHRNBT.I32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshrnbq_m[_n_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | 1 <= imm <= 8 | VSHRNBT.I16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshrnbq_m[_n_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | 1 <= imm <= 16 | VSHRNBT.I32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshrntq[_n_s16]( | a -> Qd | VSHRNT.I16 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshrntq[_n_s32]( | a -> Qd | VSHRNT.I32 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshrntq[_n_u16]( | a -> Qd | VSHRNT.I16 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint16x8_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshrntq[_n_u32]( | a -> Qd | VSHRNT.I32 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint32x4_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshrntq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 1 <= imm <= 8 | VSHRNTT.I16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshrntq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 1 <= imm <= 16 | VSHRNTT.I32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshrntq_m[_n_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | 1 <= imm <= 8 | VSHRNTT.I16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshrntq_m[_n_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | 1 <= imm <= 16 | VSHRNTT.I32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+------------------+---------------------------+
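A ``vshrnbq``/``vshrntq`` pair is typically used together to pack two wide vectors into one narrow vector: the ``b`` (bottom) form writes the even-numbered narrow lanes of ``a`` and the ``t`` (top) form writes the odd-numbered ones, each leaving the other lanes untouched. A scalar sketch of the lane placement for the 16-to-8-bit case (helper names illustrative, truncating shift as in ``VSHRN``):

.. code:: c

   #include <stdint.h>

   /* Scalar model of VSHRNB.I16: shift each 16-bit lane right by imm
    * and write the truncated result to the even (bottom) byte lanes. */
   static void shrnb_i16(int8_t d[16], const int16_t m[8], int imm)
   {
       for (int i = 0; i < 8; i++)
           d[2 * i] = (int8_t)(m[i] >> imm);
   }

   /* Scalar model of VSHRNT.I16: same shift, but the results land in
    * the odd (top) byte lanes; even lanes are left untouched. */
   static void shrnt_i16(int8_t d[16], const int16_t m[8], int imm)
   {
       for (int i = 0; i < 8; i++)
           d[2 * i + 1] = (int8_t)(m[i] >> imm);
   }

Calling ``shrnb_i16`` with one source vector and then ``shrnt_i16`` with another fills all sixteen byte lanes, mirroring the usual intrinsic idiom ``r = vshrntq(vshrnbq(r, lo, n), hi, n)``.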
Vector shift right
------------------
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=========================================+========================+============================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vshrq[_n_s8]( | a -> Qm | VSHR.S8 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vshrq[_n_s16]( | a -> Qm | VSHR.S16 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vshrq[_n_s32]( | a -> Qm | VSHR.S32 Qd, Qm, #imm | Qd -> result | |
| int32x4_t a, | 1 <= imm <= 32 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vshrq[_n_u8]( | a -> Qm | VSHR.U8 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vshrq[_n_u16]( | a -> Qm | VSHR.U16 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vshrq[_n_u32]( | a -> Qm | VSHR.U32 Qd, Qm, #imm | Qd -> result | |
| uint32x4_t a, | 1 <= imm <= 32 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshrq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | 1 <= imm <= 8 | VSHRT.S8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshrq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | 1 <= imm <= 16 | VSHRT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshrq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | 1 <= imm <= 32 | VSHRT.S32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshrq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | 1 <= imm <= 8 | VSHRT.U8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshrq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | 1 <= imm <= 16 | VSHRT.U16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshrq_m[_n_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint32x4_t a, | 1 <= imm <= 32 | VSHRT.U32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshrq_x[_n_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | 1 <= imm <= 8 | VPST | | |
| const int imm, | p -> Rp | VSHRT.S8 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshrq_x[_n_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | 1 <= imm <= 16 | VPST | | |
| const int imm, | p -> Rp | VSHRT.S16 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshrq_x[_n_s32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | 1 <= imm <= 32 | VPST | | |
| const int imm, | p -> Rp | VSHRT.S32 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshrq_x[_n_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | 1 <= imm <= 8 | VPST | | |
| const int imm, | p -> Rp | VSHRT.U8 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshrq_x[_n_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | 1 <= imm <= 16 | VPST | | |
| const int imm, | p -> Rp | VSHRT.U16 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshrq_x[_n_u32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | 1 <= imm <= 32 | VPST | | |
| const int imm, | p -> Rp | VSHRT.U32 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+----------------------------+------------------+---------------------------+
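Per lane, ``vshrq`` is an ordinary right shift: sign-propagating for the signed variants, zero-filling for the unsigned ones. A minimal scalar sketch of one signed lane (the helper name ``vshr_s8_lane`` is hypothetical, for illustration only):

.. code:: c

   #include <stdint.h>

   /* Scalar model of one lane of vshrq[_n_s8]: arithmetic shift right.
    * Note imm may equal the lane width (8); the shift is well defined
    * here because the operand is promoted to int before shifting. */
   static int8_t vshr_s8_lane(int8_t a, int imm)
   {
       return (int8_t)(a >> imm);   /* sign bit is propagated */
   }

The ``_m`` variants select between the shifted lane and the corresponding lane of ``inactive`` under ``p``; the ``_x`` variants leave the false-predicated lanes unspecified.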
Vector shift right and insert
-----------------------------
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=========================================+========================+===========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vsriq[_n_s8]( | a -> Qd | VSRI.8 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vsriq[_n_s16]( | a -> Qd | VSRI.16 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vsriq[_n_s32]( | a -> Qd | VSRI.32 Qd, Qm, #imm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b, | 1 <= imm <= 32 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vsriq[_n_u8]( | a -> Qd | VSRI.8 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vsriq[_n_u16]( | a -> Qd | VSRI.16 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vsriq[_n_u32]( | a -> Qd | VSRI.32 Qd, Qm, #imm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b, | 1 <= imm <= 32 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vsriq_m[_n_s8]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | 1 <= imm <= 8 | VSRIT.8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vsriq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 1 <= imm <= 16 | VSRIT.16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vsriq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 1 <= imm <= 32 | VSRIT.32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vsriq_m[_n_u8]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | 1 <= imm <= 8 | VSRIT.8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vsriq_m[_n_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | 1 <= imm <= 16 | VSRIT.16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vsriq_m[_n_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | 1 <= imm <= 32 | VSRIT.32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
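``VSRI`` differs from a plain shift in that the shifted value is *inserted* into the destination: the top ``imm`` bits of each lane of ``a`` survive. A scalar sketch of one 8-bit lane (the helper name ``vsri_u8_lane`` is hypothetical, assuming the insert semantics described in the Arm architecture manual):

.. code:: c

   #include <stdint.h>

   /* Scalar model of one lane of vsriq[_n_u8]: b is shifted right by
    * imm and inserted into a; the top imm bits of a are preserved.
    * With imm == 8 the mask is empty and a is returned unchanged. */
   static uint8_t vsri_u8_lane(uint8_t a, uint8_t b, int imm)
   {
       uint8_t mask = (uint8_t)(0xFFu >> imm);      /* bits replaced */
       return (uint8_t)((a & (uint8_t)~mask) | ((uint8_t)(b >> imm) & mask));
   }

This insert behaviour is what makes ``vsriq`` useful for packing bitfields from two vectors without a separate mask-and-or step.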
Left
~~~~
Vector saturating rounding shift left
-------------------------------------
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+===========================================+=============================+===================================+=============================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrshlq[_n_s8]( | a -> Qda | VQRSHL.S8 Qda, Rm | Qda -> result | |
| int8x16_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrshlq[_n_s16]( | a -> Qda | VQRSHL.S16 Qda, Rm | Qda -> result | |
| int16x8_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrshlq[_n_s32]( | a -> Qda | VQRSHL.S32 Qda, Rm | Qda -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqrshlq[_n_u8]( | a -> Qda | VQRSHL.U8 Qda, Rm | Qda -> result | |
| uint8x16_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqrshlq[_n_u16]( | a -> Qda | VQRSHL.U16 Qda, Rm | Qda -> result | |
| uint16x8_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqrshlq[_n_u32]( | a -> Qda | VQRSHL.U32 Qda, Rm | Qda -> result | |
| uint32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrshlq_m_n[_s8]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VQRSHLT.S8 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrshlq_m_n[_s16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VQRSHLT.S16 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrshlq_m_n[_s32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VQRSHLT.S32 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqrshlq_m_n[_u8]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VQRSHLT.U8 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqrshlq_m_n[_u16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VQRSHLT.U16 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqrshlq_m_n[_u32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VQRSHLT.U32 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vqrshlq[_s8]( | a -> Qm | VQRSHL.S8 Qd, Qm, Qn | Qd -> result | |
| int8x16_t a, | b -> Qn | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vqrshlq[_s16]( | a -> Qm | VQRSHL.S16 Qd, Qm, Qn | Qd -> result | |
| int16x8_t a, | b -> Qn | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vqrshlq[_s32]( | a -> Qm | VQRSHL.S32 Qd, Qm, Qn | Qd -> result | |
| int32x4_t a, | b -> Qn | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vqrshlq[_u8]( | a -> Qm | VQRSHL.U8 Qd, Qm, Qn | Qd -> result | |
| uint8x16_t a, | b -> Qn | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vqrshlq[_u16]( | a -> Qm | VQRSHL.U16 Qd, Qm, Qn | Qd -> result | |
| uint16x8_t a, | b -> Qn | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vqrshlq[_u32]( | a -> Qm | VQRSHL.U32 Qd, Qm, Qn | Qd -> result | |
| uint32x4_t a, | b -> Qn | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqrshlq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | b -> Qn | VQRSHLT.S8 Qd, Qm, Qn | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqrshlq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | b -> Qn | VQRSHLT.S16 Qd, Qm, Qn | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqrshlq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | b -> Qn | VQRSHLT.S32 Qd, Qm, Qn | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqrshlq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | b -> Qn | VQRSHLT.U8 Qd, Qm, Qn | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqrshlq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | b -> Qn | VQRSHLT.U16 Qd, Qm, Qn | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqrshlq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint32x4_t a, | b -> Qn | VQRSHLT.U32 Qd, Qm, Qn | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]uqrshll( | value -> [RdaHi,RdaLo] | UQRSHLL RdaLo, RdaHi, #64, Rm | [RdaHi,RdaLo] -> result | |
| uint64_t value, | shift -> Rm | | | |
| int32_t shift) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]uqrshll_sat48( | value -> [RdaHi,RdaLo] | UQRSHLL RdaLo, RdaHi, #48, Rm | [RdaHi,RdaLo] -> result | |
| uint64_t value, | shift -> Rm | | | |
| int32_t shift) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]uqshll( | value -> [RdaHi,RdaLo] | UQSHLL RdaLo, RdaHi, #shift | [RdaHi,RdaLo] -> result | |
| uint64_t value, | 1 <= shift <= 32 | | | |
| const int shift) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]uqrshl( | value -> Rda | UQRSHL Rda, Rm | Rda -> result | |
| uint32_t value, | shift -> Rm | | | |
| int32_t shift) | | | | |
+-------------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
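The scalar ``uqrshl`` row above gives the simplest view of the saturating rounding shift: a non-negative shift count shifts left and saturates on overflow, while a negative count performs a rounding shift right. A hedged scalar sketch (the function name ``uqrshl_model`` is hypothetical, and this is an illustrative approximation rather than the architectural pseudocode):

.. code:: c

   #include <stdint.h>

   /* Illustrative model of [__arm_]uqrshl: 32-bit unsigned saturating
    * rounding shift left by a signed register amount. */
   static uint32_t uqrshl_model(uint32_t value, int32_t shift)
   {
       if (shift >= 0) {
           if (shift >= 32)
               return value ? UINT32_MAX : 0;    /* all bits shifted out */
           uint64_t r = (uint64_t)value << shift;
           return r > UINT32_MAX ? UINT32_MAX : (uint32_t)r;  /* saturate */
       }
       shift = -shift;                           /* rounding shift right */
       if (shift > 32)
           return 0;
       return (uint32_t)(((uint64_t)value + (1ull << (shift - 1))) >> shift);
   }

The vector ``vqrshlq`` forms apply the same per-lane rule, with each lane of ``b`` (or the single register ``Rm``) supplying the signed shift count.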
Vector saturating shift left
----------------------------
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+===========================================+=============================+=================================+=============================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vqshlq[_s8]( | a -> Qm | VQSHL.S8 Qd, Qm, Qn | Qd -> result | |
| int8x16_t a, | b -> Qn | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vqshlq[_s16]( | a -> Qm | VQSHL.S16 Qd, Qm, Qn | Qd -> result | |
| int16x8_t a, | b -> Qn | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vqshlq[_s32]( | a -> Qm | VQSHL.S32 Qd, Qm, Qn | Qd -> result | |
| int32x4_t a, | b -> Qn | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vqshlq[_u8]( | a -> Qm | VQSHL.U8 Qd, Qm, Qn | Qd -> result | |
| uint8x16_t a, | b -> Qn | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vqshlq[_u16]( | a -> Qm | VQSHL.U16 Qd, Qm, Qn | Qd -> result | |
| uint16x8_t a, | b -> Qn | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vqshlq[_u32]( | a -> Qm | VQSHL.U32 Qd, Qm, Qn | Qd -> result | |
| uint32x4_t a, | b -> Qn | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqshlq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | b -> Qn | VQSHLT.S8 Qd, Qm, Qn | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqshlq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | b -> Qn | VQSHLT.S16 Qd, Qm, Qn | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqshlq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | b -> Qn | VQSHLT.S32 Qd, Qm, Qn | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshlq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | b -> Qn | VQSHLT.U8 Qd, Qm, Qn | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshlq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | b -> Qn | VQSHLT.U16 Qd, Qm, Qn | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqshlq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint32x4_t a, | b -> Qn | VQSHLT.U32 Qd, Qm, Qn | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vqshlq_n[_s8]( | a -> Qm | VQSHL.S8 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | 0 <= imm <= 7 | | | |
| const int imm) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vqshlq_n[_s16]( | a -> Qm | VQSHL.S16 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | 0 <= imm <= 15 | | | |
| const int imm) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vqshlq_n[_s32]( | a -> Qm | VQSHL.S32 Qd, Qm, #imm | Qd -> result | |
| int32x4_t a, | 0 <= imm <= 31 | | | |
| const int imm) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vqshlq_n[_u8]( | a -> Qm | VQSHL.U8 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | 0 <= imm <= 7 | | | |
| const int imm) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vqshlq_n[_u16]( | a -> Qm | VQSHL.U16 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | 0 <= imm <= 15 | | | |
| const int imm) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vqshlq_n[_u32]( | a -> Qm | VQSHL.U32 Qd, Qm, #imm | Qd -> result | |
| uint32x4_t a, | 0 <= imm <= 31 | | | |
| const int imm) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqshlq_m_n[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | 0 <= imm <= 7 | VQSHLT.S8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqshlq_m_n[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | 0 <= imm <= 15 | VQSHLT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqshlq_m_n[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | 0 <= imm <= 31 | VQSHLT.S32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshlq_m_n[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | 0 <= imm <= 7 | VQSHLT.U8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshlq_m_n[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | 0 <= imm <= 15 | VQSHLT.U16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqshlq_m_n[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint32x4_t a, | 0 <= imm <= 31 | VQSHLT.U32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqshlq_r[_s8]( | a -> Qda | VQSHL.S8 Qda, Rm | Qda -> result | |
| int8x16_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqshlq_r[_s16]( | a -> Qda | VQSHL.S16 Qda, Rm | Qda -> result | |
| int16x8_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqshlq_r[_s32]( | a -> Qda | VQSHL.S32 Qda, Rm | Qda -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshlq_r[_u8]( | a -> Qda | VQSHL.U8 Qda, Rm | Qda -> result | |
| uint8x16_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshlq_r[_u16]( | a -> Qda | VQSHL.U16 Qda, Rm | Qda -> result | |
| uint16x8_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqshlq_r[_u32]( | a -> Qda | VQSHL.U32 Qda, Rm | Qda -> result | |
| uint32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqshlq_m_r[_s8]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VQSHLT.S8 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqshlq_m_r[_s16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VQSHLT.S16 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vqshlq_m_r[_s32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VQSHLT.S32 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshlq_m_r[_u8]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VQSHLT.U8 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshlq_m_r[_u16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VQSHLT.U16 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqshlq_m_r[_u32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VQSHLT.U32 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshluq[_n_s8]( | a -> Qm | VQSHLU.S8 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | 0 <= imm <= 7 | | | |
| const int imm) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshluq[_n_s16]( | a -> Qm | VQSHLU.S16 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | 0 <= imm <= 15 | | | |
| const int imm) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqshluq[_n_s32]( | a -> Qm | VQSHLU.S32 Qd, Qm, #imm | Qd -> result | |
| int32x4_t a, | 0 <= imm <= 31 | | | |
| const int imm) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqshluq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | 0 <= imm <= 7 | VQSHLUT.S8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqshluq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | 0 <= imm <= 15 | VQSHLUT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vqshluq_m[_n_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | 0 <= imm <= 31 | VQSHLUT.S32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]sqshll( | value -> [RdaHi,RdaLo] | SQSHLL RdaLo, RdaHi, #shift | [RdaHi,RdaLo] -> result | |
| int64_t value, | 1 <= shift <= 32 | | | |
| const int shift) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]uqshl( | value -> Rda | UQSHL Rda, #shift | Rda -> result | |
| uint32_t value, | 1 <= shift <= 32 | | | |
| const int shift) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]sqshl( | value -> Rda | SQSHL Rda, #shift | Rda -> result | |
| int32_t value, | 1 <= shift <= 32 | | | |
| const int shift) | | | | |
+-------------------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
Vector rounding shift left
--------------------------
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==========================================+========================+===========================+===================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrshlq[_n_s8]( | a -> Qda | VRSHL.S8 Qda, Rm | Qda -> result | |
| int8x16_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrshlq[_n_s16]( | a -> Qda | VRSHL.S16 Qda, Rm | Qda -> result | |
| int16x8_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vrshlq[_n_s32]( | a -> Qda | VRSHL.S32 Qda, Rm | Qda -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrshlq[_n_u8]( | a -> Qda | VRSHL.U8 Qda, Rm | Qda -> result | |
| uint8x16_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrshlq[_n_u16]( | a -> Qda | VRSHL.U16 Qda, Rm | Qda -> result | |
| uint16x8_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vrshlq[_n_u32]( | a -> Qda | VRSHL.U32 Qda, Rm | Qda -> result | |
| uint32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrshlq_m_n[_s8]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VRSHLT.S8 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrshlq_m_n[_s16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VRSHLT.S16 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vrshlq_m_n[_s32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VRSHLT.S32 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrshlq_m_n[_u8]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VRSHLT.U8 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrshlq_m_n[_u16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VRSHLT.U16 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vrshlq_m_n[_u32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VRSHLT.U32 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vrshlq[_s8]( | a -> Qm | VRSHL.S8 Qd, Qm, Qn | Qd -> result | |
| int8x16_t a, | b -> Qn | | | |
| int8x16_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vrshlq[_s16]( | a -> Qm | VRSHL.S16 Qd, Qm, Qn | Qd -> result | |
| int16x8_t a, | b -> Qn | | | |
| int16x8_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vrshlq[_s32]( | a -> Qm | VRSHL.S32 Qd, Qm, Qn | Qd -> result | |
| int32x4_t a, | b -> Qn | | | |
| int32x4_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vrshlq[_u8]( | a -> Qm | VRSHL.U8 Qd, Qm, Qn | Qd -> result | |
| uint8x16_t a, | b -> Qn | | | |
| int8x16_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vrshlq[_u16]( | a -> Qm | VRSHL.U16 Qd, Qm, Qn | Qd -> result | |
| uint16x8_t a, | b -> Qn | | | |
| int16x8_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vrshlq[_u32]( | a -> Qm | VRSHL.U32 Qd, Qm, Qn | Qd -> result | |
| uint32x4_t a, | b -> Qn | | | |
| int32x4_t b) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrshlq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | b -> Qn | VRSHLT.S8 Qd, Qm, Qn | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrshlq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | b -> Qn | VRSHLT.S16 Qd, Qm, Qn | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vrshlq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | b -> Qn | VRSHLT.S32 Qd, Qm, Qn | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrshlq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | b -> Qn | VRSHLT.U8 Qd, Qm, Qn | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrshlq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | b -> Qn | VRSHLT.U16 Qd, Qm, Qn | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vrshlq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint32x4_t a, | b -> Qn | VRSHLT.U32 Qd, Qm, Qn | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vrshlq_x[_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qn | VPST | | |
| int8x16_t b, | p -> Rp | VRSHLT.S8 Qd, Qm, Qn | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vrshlq_x[_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qn | VPST | | |
| int16x8_t b, | p -> Rp | VRSHLT.S16 Qd, Qm, Qn | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vrshlq_x[_s32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qn | VPST | | |
| int32x4_t b, | p -> Rp | VRSHLT.S32 Qd, Qm, Qn | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vrshlq_x[_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qn | VPST | | |
| int8x16_t b, | p -> Rp | VRSHLT.U8 Qd, Qm, Qn | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vrshlq_x[_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qn | VPST | | |
| int16x8_t b, | p -> Rp | VRSHLT.U16 Qd, Qm, Qn | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vrshlq_x[_u32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qn | VPST | | |
| int32x4_t b, | p -> Rp | VRSHLT.U32 Qd, Qm, Qn | | |
| mve_pred16_t p) | | | | |
+------------------------------------------+------------------------+---------------------------+-------------------+---------------------------+
Whole vector left shift with carry
----------------------------------
+----------------------------------------+------------------------+---------------------------+--------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+========================================+========================+===========================+====================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshlcq[_s8]( | a -> Qda | VSHLC Qda, Rdm, #imm | Qda -> result | |
| int8x16_t a, | *b -> Rdm | | Rdm -> *b | |
| uint32_t *b, | 1 <= imm <= 32 | | | |
| const int imm) | | | | |
+----------------------------------------+------------------------+---------------------------+--------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshlcq[_s16]( | a -> Qda | VSHLC Qda, Rdm, #imm | Qda -> result | |
| int16x8_t a, | *b -> Rdm | | Rdm -> *b | |
| uint32_t *b, | 1 <= imm <= 32 | | | |
| const int imm) | | | | |
+----------------------------------------+------------------------+---------------------------+--------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshlcq[_s32]( | a -> Qda | VSHLC Qda, Rdm, #imm | Qda -> result | |
| int32x4_t a, | *b -> Rdm | | Rdm -> *b | |
| uint32_t *b, | 1 <= imm <= 32 | | | |
| const int imm) | | | | |
+----------------------------------------+------------------------+---------------------------+--------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshlcq[_u8]( | a -> Qda | VSHLC Qda, Rdm, #imm | Qda -> result | |
| uint8x16_t a, | *b -> Rdm | | Rdm -> *b | |
| uint32_t *b, | 1 <= imm <= 32 | | | |
| const int imm) | | | | |
+----------------------------------------+------------------------+---------------------------+--------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshlcq[_u16]( | a -> Qda | VSHLC Qda, Rdm, #imm | Qda -> result | |
| uint16x8_t a, | *b -> Rdm | | Rdm -> *b | |
| uint32_t *b, | 1 <= imm <= 32 | | | |
| const int imm) | | | | |
+----------------------------------------+------------------------+---------------------------+--------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshlcq[_u32]( | a -> Qda | VSHLC Qda, Rdm, #imm | Qda -> result | |
| uint32x4_t a, | *b -> Rdm | | Rdm -> *b | |
| uint32_t *b, | 1 <= imm <= 32 | | | |
| const int imm) | | | | |
+----------------------------------------+------------------------+---------------------------+--------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshlcq_m[_s8]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int8x16_t a, | *b -> Rdm | VPST | Rdm -> *b | |
| uint32_t *b, | 1 <= imm <= 32 | VSHLCT Qda, Rdm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+---------------------------+--------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshlcq_m[_s16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int16x8_t a, | *b -> Rdm | VPST | Rdm -> *b | |
| uint32_t *b, | 1 <= imm <= 32 | VSHLCT Qda, Rdm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+---------------------------+--------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshlcq_m[_s32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int32x4_t a, | *b -> Rdm | VPST | Rdm -> *b | |
| uint32_t *b, | 1 <= imm <= 32 | VSHLCT Qda, Rdm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+---------------------------+--------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshlcq_m[_u8]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint8x16_t a, | *b -> Rdm | VPST | Rdm -> *b | |
| uint32_t *b, | 1 <= imm <= 32 | VSHLCT Qda, Rdm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+---------------------------+--------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshlcq_m[_u16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint16x8_t a, | *b -> Rdm | VPST | Rdm -> *b | |
| uint32_t *b, | 1 <= imm <= 32 | VSHLCT Qda, Rdm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+---------------------------+--------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshlcq_m[_u32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint32x4_t a, | *b -> Rdm | VPST | Rdm -> *b | |
| uint32_t *b, | 1 <= imm <= 32 | VSHLCT Qda, Rdm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+----------------------------------------+------------------------+---------------------------+--------------------+---------------------------+
Vector shift left
-----------------
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+===========================================+========================+==============================+===================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshllbq[_n_s8]( | a -> Qm | VSHLLB.S8 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshllbq[_n_s16]( | a -> Qm | VSHLLB.S16 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshllbq[_n_u8]( | a -> Qm | VSHLLB.U8 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshllbq[_n_u16]( | a -> Qm | VSHLLB.U16 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshllbq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | 1 <= imm <= 8 | VSHLLBT.S8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshllbq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | 1 <= imm <= 16 | VSHLLBT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshllbq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | 1 <= imm <= 8 | VSHLLBT.U8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshllbq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | 1 <= imm <= 16 | VSHLLBT.U16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshllbq_x[_n_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | 1 <= imm <= 8 | VPST | | |
| const int imm, | p -> Rp | VSHLLBT.S8 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshllbq_x[_n_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | 1 <= imm <= 16 | VPST | | |
| const int imm, | p -> Rp | VSHLLBT.S16 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshllbq_x[_n_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | 1 <= imm <= 8 | VPST | | |
| const int imm, | p -> Rp | VSHLLBT.U8 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshllbq_x[_n_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | 1 <= imm <= 16 | VPST | | |
| const int imm, | p -> Rp | VSHLLBT.U16 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshlltq[_n_s8]( | a -> Qm | VSHLLT.S8 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshlltq[_n_s16]( | a -> Qm | VSHLLT.S16 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshlltq[_n_u8]( | a -> Qm | VSHLLT.U8 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | 1 <= imm <= 8 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshlltq[_n_u16]( | a -> Qm | VSHLLT.U16 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | 1 <= imm <= 16 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshlltq_m[_n_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | 1 <= imm <= 8 | VSHLLTT.S8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshlltq_m[_n_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | 1 <= imm <= 16 | VSHLLTT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshlltq_m[_n_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | 1 <= imm <= 8 | VSHLLTT.U8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshlltq_m[_n_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | 1 <= imm <= 16 | VSHLLTT.U16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshlltq_x[_n_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | 1 <= imm <= 8 | VPST | | |
| const int imm, | p -> Rp | VSHLLTT.S8 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshlltq_x[_n_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | 1 <= imm <= 16 | VPST | | |
| const int imm, | p -> Rp | VSHLLTT.S16 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshlltq_x[_n_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | 1 <= imm <= 8 | VPST | | |
| const int imm, | p -> Rp | VSHLLTT.U8 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshlltq_x[_n_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | 1 <= imm <= 16 | VPST | | |
| const int imm, | p -> Rp | VSHLLTT.U16 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vshlq[_s8]( | a -> Qm | VSHL.S8 Qd, Qm, Qn | Qd -> result | |
| int8x16_t a, | b -> Qn | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vshlq[_s16]( | a -> Qm | VSHL.S16 Qd, Qm, Qn | Qd -> result | |
| int16x8_t a, | b -> Qn | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vshlq[_s32]( | a -> Qm | VSHL.S32 Qd, Qm, Qn | Qd -> result | |
| int32x4_t a, | b -> Qn | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vshlq[_u8]( | a -> Qm | VSHL.U8 Qd, Qm, Qn | Qd -> result | |
| uint8x16_t a, | b -> Qn | | | |
| int8x16_t b) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vshlq[_u16]( | a -> Qm | VSHL.U16 Qd, Qm, Qn | Qd -> result | |
| uint16x8_t a, | b -> Qn | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vshlq[_u32]( | a -> Qm | VSHL.U32 Qd, Qm, Qn | Qd -> result | |
| uint32x4_t a, | b -> Qn | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshlq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | b -> Qn | VSHLT.S8 Qd, Qm, Qn | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshlq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | b -> Qn | VSHLT.S16 Qd, Qm, Qn | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshlq_m[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | b -> Qn | VSHLT.S32 Qd, Qm, Qn | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshlq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | b -> Qn | VSHLT.U8 Qd, Qm, Qn | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshlq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | b -> Qn | VSHLT.U16 Qd, Qm, Qn | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshlq_m[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint32x4_t a, | b -> Qn | VSHLT.U32 Qd, Qm, Qn | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshlq_x[_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qn | VPST | | |
| int8x16_t b, | p -> Rp | VSHLT.S8 Qd, Qm, Qn | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshlq_x[_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qn | VPST | | |
| int16x8_t b, | p -> Rp | VSHLT.S16 Qd, Qm, Qn | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshlq_x[_s32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qn | VPST | | |
| int32x4_t b, | p -> Rp | VSHLT.S32 Qd, Qm, Qn | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshlq_x[_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qn | VPST | | |
| int8x16_t b, | p -> Rp | VSHLT.U8 Qd, Qm, Qn | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshlq_x[_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qn | VPST | | |
| int16x8_t b, | p -> Rp | VSHLT.U16 Qd, Qm, Qn | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshlq_x[_u32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qn | VPST | | |
| int32x4_t b, | p -> Rp | VSHLT.U32 Qd, Qm, Qn | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshlq_n[_s8]( | a -> Qm | VSHL.S8 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | 0 <= imm <= 7 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshlq_n[_s16]( | a -> Qm | VSHL.S16 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | 0 <= imm <= 15 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshlq_n[_s32]( | a -> Qm | VSHL.S32 Qd, Qm, #imm | Qd -> result | |
| int32x4_t a, | 0 <= imm <= 31 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshlq_n[_u8]( | a -> Qm | VSHL.U8 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | 0 <= imm <= 7 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshlq_n[_u16]( | a -> Qm | VSHL.U16 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | 0 <= imm <= 15 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshlq_n[_u32]( | a -> Qm | VSHL.U32 Qd, Qm, #imm | Qd -> result | |
| uint32x4_t a, | 0 <= imm <= 31 | | | |
| const int imm) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshlq_m_n[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | 0 <= imm <= 7 | VSHLT.S8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshlq_m_n[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | 0 <= imm <= 15 | VSHLT.S16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshlq_m_n[_s32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int32x4_t a, | 0 <= imm <= 31 | VSHLT.S32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshlq_m_n[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | 0 <= imm <= 7 | VSHLT.U8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshlq_m_n[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | 0 <= imm <= 15 | VSHLT.U16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshlq_m_n[_u32]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint32x4_t a, | 0 <= imm <= 31 | VSHLT.U32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshlq_x_n[_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | 0 <= imm <= 7 | VPST | | |
| const int imm, | p -> Rp | VSHLT.S8 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshlq_x_n[_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | 0 <= imm <= 15 | VPST | | |
| const int imm, | p -> Rp | VSHLT.S16 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshlq_x_n[_s32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | 0 <= imm <= 31 | VPST | | |
| const int imm, | p -> Rp | VSHLT.S32 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshlq_x_n[_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | 0 <= imm <= 7 | VPST | | |
| const int imm, | p -> Rp | VSHLT.U8 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshlq_x_n[_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | 0 <= imm <= 15 | VPST | | |
| const int imm, | p -> Rp | VSHLT.U16 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshlq_x_n[_u32]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | 0 <= imm <= 31 | VPST | | |
| const int imm, | p -> Rp | VSHLT.U32 Qd, Qm, #imm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshlq_r[_s8]( | a -> Qda | VSHL.S8 Qda, Rm | Qda -> result | |
| int8x16_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshlq_r[_s16]( | a -> Qda | VSHL.S16 Qda, Rm | Qda -> result | |
| int16x8_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshlq_r[_s32]( | a -> Qda | VSHL.S32 Qda, Rm | Qda -> result | |
| int32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshlq_r[_u8]( | a -> Qda | VSHL.U8 Qda, Rm | Qda -> result | |
| uint8x16_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshlq_r[_u16]( | a -> Qda | VSHL.U16 Qda, Rm | Qda -> result | |
| uint16x8_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshlq_r[_u32]( | a -> Qda | VSHL.U32 Qda, Rm | Qda -> result | |
| uint32x4_t a, | b -> Rm | | | |
| int32_t b) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vshlq_m_r[_s8]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int8x16_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VSHLT.S8 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vshlq_m_r[_s16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int16x8_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VSHLT.S16 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vshlq_m_r[_s32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| int32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VSHLT.S32 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vshlq_m_r[_u8]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint8x16_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VSHLT.U8 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vshlq_m_r[_u16]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint16x8_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VSHLT.U16 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vshlq_m_r[_u32]( | a -> Qda | VMSR P0, Rp | Qda -> result | |
| uint32x4_t a, | b -> Rm | VPST | | |
| int32_t b, | p -> Rp | VSHLT.U32 Qda, Rm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+------------------------------+-------------------+---------------------------+
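The ``VSHLLB``/``VSHLLT`` rows above widen one half of the source lanes while shifting: ``VSHLLB`` takes the bottom (even-numbered) elements and ``VSHLLT`` the top (odd-numbered) elements, widens each to twice the width, then shifts left by ``imm``. The scalar C sketch below models those lane semantics for the unsigned 8-bit case; the ``model_`` helper names are hypothetical and exist only for illustration, not in ``arm_mve.h``.

.. code:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Scalar model of vshllbq[_n_u8]: widen the even-numbered (bottom)
    * lanes of a 16-lane uint8 vector to uint16 and shift left by imm. */
   void model_vshllbq_u8(const uint8_t a[16], int imm, uint16_t r[8])
   {
       for (int i = 0; i < 8; i++)
           r[i] = (uint16_t)a[2 * i] << imm;      /* bottom = even lanes */
   }

   /* Scalar model of vshlltq[_n_u8]: same, for the odd (top) lanes. */
   void model_vshlltq_u8(const uint8_t a[16], int imm, uint16_t r[8])
   {
       for (int i = 0; i < 8; i++)
           r[i] = (uint16_t)a[2 * i + 1] << imm;  /* top = odd lanes */
   }

   int main(void)
   {
       uint8_t a[16];
       for (int i = 0; i < 16; i++) a[i] = (uint8_t)i;
       uint16_t bot[8], top[8];
       model_vshllbq_u8(a, 3, bot);
       model_vshlltq_u8(a, 3, top);
       printf("%u %u\n", bot[1], top[1]);  /* lanes a[2], a[3] shifted: 16 24 */
       return 0;
   }

Note that this is why the immediate range is ``1 <= imm <= 8`` for 8-bit sources: the result lane is 16 bits wide, so a shift by the full source width cannot overflow it.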
Vector shift left and insert
----------------------------
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=========================================+========================+===========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int8x16_t [__arm_]vsliq[_n_s8]( | a -> Qd | VSLI.8 Qd, Qm, #imm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int8x16_t b, | 0 <= imm <= 7 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int16x8_t [__arm_]vsliq[_n_s16]( | a -> Qd | VSLI.16 Qd, Qm, #imm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int16x8_t b, | 0 <= imm <= 15 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| int32x4_t [__arm_]vsliq[_n_s32]( | a -> Qd | VSLI.32 Qd, Qm, #imm | Qd -> result | |
| int32x4_t a, | b -> Qm | | | |
| int32x4_t b, | 0 <= imm <= 31 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint8x16_t [__arm_]vsliq[_n_u8]( | a -> Qd | VSLI.8 Qd, Qm, #imm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint8x16_t b, | 0 <= imm <= 7 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint16x8_t [__arm_]vsliq[_n_u16]( | a -> Qd | VSLI.16 Qd, Qm, #imm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint16x8_t b, | 0 <= imm <= 15 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE/NEON`` |
| | | | | |
| uint32x4_t [__arm_]vsliq[_n_u32]( | a -> Qd | VSLI.32 Qd, Qm, #imm | Qd -> result | |
| uint32x4_t a, | b -> Qm | | | |
| uint32x4_t b, | 0 <= imm <= 31 | | | |
| const int imm) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vsliq_m[_n_s8]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int8x16_t b, | 0 <= imm <= 7 | VSLIT.8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vsliq_m[_n_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int16x8_t b, | 0 <= imm <= 15 | VSLIT.16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vsliq_m[_n_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPST | | |
| int32x4_t b, | 0 <= imm <= 31 | VSLIT.32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vsliq_m[_n_u8]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint8x16_t b, | 0 <= imm <= 7 | VSLIT.8 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vsliq_m[_n_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | 0 <= imm <= 15 | VSLIT.16 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vsliq_m[_n_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | 0 <= imm <= 31 | VSLIT.32 Qd, Qm, #imm | | |
| const int imm, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+-----------------------------------------+------------------------+---------------------------+------------------+---------------------------+
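Unlike a plain shift, ``VSLI`` merges its result into the destination: each lane of the second operand is shifted left by ``imm`` and inserted into the corresponding lane of the first, leaving the low ``imm`` bits of the destination lane unchanged. The scalar C sketch below models that lane behaviour for the 8-bit case; ``model_vsliq_u8`` is a hypothetical helper name used only for illustration.

.. code:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Scalar model of vsliq[_n_u8] (VSLI.8): per lane,
    * a = (a & ((1 << imm) - 1)) | (b << imm). */
   void model_vsliq_u8(uint8_t a[16], const uint8_t b[16], int imm)
   {
       uint8_t keep = (uint8_t)((1u << imm) - 1);  /* low bits kept from a */
       for (int i = 0; i < 16; i++)
           a[i] = (uint8_t)((a[i] & keep) | (uint8_t)(b[i] << imm));
   }

   int main(void)
   {
       uint8_t a[16], b[16];
       for (int i = 0; i < 16; i++) { a[i] = 0xFF; b[i] = 1; }
       model_vsliq_u8(a, b, 4);
       printf("0x%02X\n", a[0]);  /* (0xFF & 0x0F) | (1 << 4) = 0x1F */
       return 0;
   }

This insert behaviour makes ``vsliq`` useful for packing bit fields, for example combining two 4-bit quantities into one byte lane without a separate mask-and-OR step.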
Move
====
Vector move
~~~~~~~~~~~
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+====================================================+========================+========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmovlbq[_s8](int8x16_t a) | a -> Qm | VMOVLB.S8 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmovlbq[_s16](int16x8_t a) | a -> Qm | VMOVLB.S16 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmovlbq[_u8](uint8x16_t a) | a -> Qm | VMOVLB.U8 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmovlbq[_u16](uint16x8_t a) | a -> Qm | VMOVLB.U16 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmovlbq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | p -> Rp | VMOVLBT.S8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmovlbq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | p -> Rp | VMOVLBT.S16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmovlbq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | p -> Rp | VMOVLBT.U8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmovlbq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | p -> Rp | VMOVLBT.U16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmovlbq_x[_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMOVLBT.S8 Qd, Qm | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmovlbq_x[_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMOVLBT.S16 Qd, Qm | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmovlbq_x[_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMOVLBT.U8 Qd, Qm | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmovlbq_x[_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMOVLBT.U16 Qd, Qm | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmovltq[_s8](int8x16_t a) | a -> Qm | VMOVLT.S8 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmovltq[_s16](int16x8_t a) | a -> Qm | VMOVLT.S16 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmovltq[_u8](uint8x16_t a) | a -> Qm | VMOVLT.U8 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmovltq[_u16](uint16x8_t a) | a -> Qm | VMOVLT.U16 Qd, Qm | Qd -> result | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmovltq_m[_s8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t inactive, | a -> Qm | VPST | | |
| int8x16_t a, | p -> Rp | VMOVLTT.S8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmovltq_m[_s16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| int32x4_t inactive, | a -> Qm | VPST | | |
| int16x8_t a, | p -> Rp | VMOVLTT.S16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmovltq_m[_u8]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t inactive, | a -> Qm | VPST | | |
| uint8x16_t a, | p -> Rp | VMOVLTT.U8 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmovltq_m[_u16]( | inactive -> Qd | VMSR P0, Rp | Qd -> result | |
| uint32x4_t inactive, | a -> Qm | VPST | | |
| uint16x8_t a, | p -> Rp | VMOVLTT.U16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmovltq_x[_s8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMOVLTT.S8 Qd, Qm | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vmovltq_x[_s16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMOVLTT.S16 Qd, Qm | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmovltq_x[_u8]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMOVLTT.U8 Qd, Qm | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vmovltq_x[_u16]( | a -> Qm | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VMOVLTT.U16 Qd, Qm | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmovnbq[_s16]( | a -> Qd | VMOVNB.I16 Qd, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmovnbq[_s32]( | a -> Qd | VMOVNB.I32 Qd, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmovnbq[_u16]( | a -> Qd | VMOVNB.I16 Qd, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmovnbq[_u32]( | a -> Qd | VMOVNB.I32 Qd, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmovnbq_m[_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMOVNBT.I16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmovnbq_m[_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMOVNBT.I32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmovnbq_m[_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VMOVNBT.I16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmovnbq_m[_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VMOVNBT.I32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmovntq[_s16]( | a -> Qd | VMOVNT.I16 Qd, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmovntq[_s32]( | a -> Qd | VMOVNT.I32 Qd, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmovntq[_u16]( | a -> Qd | VMOVNT.I16 Qd, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmovntq[_u32]( | a -> Qd | VMOVNT.I32 Qd, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vmovntq_m[_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VMOVNTT.I16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vmovntq_m[_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VMOVNTT.I32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vmovntq_m[_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VMOVNTT.I16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vmovntq_m[_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VMOVNTT.I32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+----------------------------------------------------+------------------------+------------------------+------------------+---------------------------+
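
The lane selection performed by the long and narrow moves above can be sketched in scalar C. This is an illustrative model, not part of the specification, and the helper names are hypothetical: ``VMOVLB`` widens the even-numbered (bottom) source lanes and ``VMOVLT`` the odd-numbered (top) lanes, while ``VMOVNB``/``VMOVNT`` truncate into the even/odd destination lanes respectively, keeping the other lanes of ``a``.

```c
#include <stdint.h>

/* Illustrative scalar models of lane selection for the widening
   moves on signed 8-bit input (helper names are hypothetical). */
int16_t movlb_lane_s8(const int8_t src[16], int i) /* vmovlbq lane i */
{
    return (int16_t)src[2 * i];       /* even (bottom) source lane */
}

int16_t movlt_lane_s8(const int8_t src[16], int i) /* vmovltq lane i */
{
    return (int16_t)src[2 * i + 1];   /* odd (top) source lane */
}
```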
Vector saturating move and narrow
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+===========================================+========================+==========================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqmovnbq[_s16]( | a -> Qd | VQMOVNB.S16 Qd, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqmovnbq[_s32]( | a -> Qd | VQMOVNB.S32 Qd, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqmovnbq[_u16]( | a -> Qd | VQMOVNB.U16 Qd, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqmovnbq[_u32]( | a -> Qd | VQMOVNB.U32 Qd, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqmovnbq_m[_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VQMOVNBT.S16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqmovnbq_m[_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VQMOVNBT.S32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqmovnbq_m[_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VQMOVNBT.U16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqmovnbq_m[_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VQMOVNBT.U32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqmovntq[_s16]( | a -> Qd | VQMOVNT.S16 Qd, Qm | Qd -> result | |
| int8x16_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqmovntq[_s32]( | a -> Qd | VQMOVNT.S32 Qd, Qm | Qd -> result | |
| int16x8_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqmovntq[_u16]( | a -> Qd | VQMOVNT.U16 Qd, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| uint16x8_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqmovntq[_u32]( | a -> Qd | VQMOVNT.U32 Qd, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| uint32x4_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vqmovntq_m[_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VQMOVNTT.S16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vqmovntq_m[_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VQMOVNTT.S32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqmovntq_m[_u16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| uint16x8_t b, | p -> Rp | VQMOVNTT.U16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqmovntq_m[_u32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| uint32x4_t b, | p -> Rp | VQMOVNTT.U32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqmovunbq[_s16]( | a -> Qd | VQMOVUNB.S16 Qd, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqmovunbq[_s32]( | a -> Qd | VQMOVUNB.S32 Qd, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqmovunbq_m[_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VQMOVUNBT.S16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqmovunbq_m[_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VQMOVUNBT.S32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqmovuntq[_s16]( | a -> Qd | VQMOVUNT.S16 Qd, Qm | Qd -> result | |
| uint8x16_t a, | b -> Qm | | | |
| int16x8_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqmovuntq[_s32]( | a -> Qd | VQMOVUNT.S32 Qd, Qm | Qd -> result | |
| uint16x8_t a, | b -> Qm | | | |
| int32x4_t b) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vqmovuntq_m[_s16]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPST | | |
| int16x8_t b, | p -> Rp | VQMOVUNTT.S16 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vqmovuntq_m[_s32]( | a -> Qd | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPST | | |
| int32x4_t b, | p -> Rp | VQMOVUNTT.S32 Qd, Qm | | |
| mve_pred16_t p) | | | | |
+-------------------------------------------+------------------------+--------------------------+------------------+---------------------------+
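
The saturation applied per lane by the ``VQMOVN*`` and ``VQMOVUN*`` forms above can be modelled in scalar C. This sketch is illustrative (hypothetical helper names): the signed narrow clamps to the signed destination range, and the unsigned-from-signed narrow clamps negative inputs to zero.

```c
#include <stdint.h>

/* Illustrative per-lane saturation for VQMOVNB/VQMOVNT .S16 -> s8. */
int8_t sat_s16_to_s8(int16_t x)
{
    if (x > INT8_MAX) return INT8_MAX;
    if (x < INT8_MIN) return INT8_MIN;
    return (int8_t)x;
}

/* Illustrative per-lane saturation for VQMOVUNB/VQMOVUNT .S16 -> u8:
   a signed input clamped to the unsigned destination range. */
uint8_t sat_s16_to_u8(int16_t x)
{
    if (x < 0)   return 0;
    if (x > 255) return 255;
    return (uint8_t)x;
}
```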
Predication
===========
Vector Predicate NOT
~~~~~~~~~~~~~~~~~~~~
+------------------------------------------------+------------------------+------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+================================================+========================+==================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vpnot(mve_pred16_t a) | a -> Rp | VMSR P0, Rp | Rt -> result | |
| | | VPNOT | | |
| | | VMRS Rt, P0 | | |
+------------------------------------------------+------------------------+------------------+------------------+---------------------------+
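
Since ``mve_pred16_t`` is a 16-bit mask with one bit per byte lane, ``vpnot`` simply inverts every bit of P0. A minimal scalar sketch (hypothetical helper name):

```c
#include <stdint.h>

/* Illustrative scalar model of vpnot: invert all 16 predicate bits. */
uint16_t vpnot_model(uint16_t p)
{
    return (uint16_t)~p;
}
```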
Predicated select
~~~~~~~~~~~~~~~~~
+---------------------------------------+------------------------+----------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+=======================================+========================+======================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int8x16_t [__arm_]vpselq[_s8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int8x16_t a, | b -> Qm | VPSEL Qd, Qn, Qm | | |
| int8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int16x8_t [__arm_]vpselq[_s16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int16x8_t a, | b -> Qm | VPSEL Qd, Qn, Qm | | |
| int16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32x4_t [__arm_]vpselq[_s32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int32x4_t a, | b -> Qm | VPSEL Qd, Qn, Qm | | |
| int32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64x2_t [__arm_]vpselq[_s64]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| int64x2_t a, | b -> Qm | VPSEL Qd, Qn, Qm | | |
| int64x2_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint8x16_t [__arm_]vpselq[_u8]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint8x16_t a, | b -> Qm | VPSEL Qd, Qn, Qm | | |
| uint8x16_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint16x8_t [__arm_]vpselq[_u16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint16x8_t a, | b -> Qm | VPSEL Qd, Qn, Qm | | |
| uint16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32x4_t [__arm_]vpselq[_u32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint32x4_t a, | b -> Qm | VPSEL Qd, Qn, Qm | | |
| uint32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64x2_t [__arm_]vpselq[_u64]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| uint64x2_t a, | b -> Qm | VPSEL Qd, Qn, Qm | | |
| uint64x2_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float16x8_t [__arm_]vpselq[_f16]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float16x8_t a, | b -> Qm | VPSEL Qd, Qn, Qm | | |
| float16x8_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------+------------------------+----------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| float32x4_t [__arm_]vpselq[_f32]( | a -> Qn | VMSR P0, Rp | Qd -> result | |
| float32x4_t a, | b -> Qm | VPSEL Qd, Qn, Qm | | |
| float32x4_t b, | p -> Rp | | | |
| mve_pred16_t p) | | | | |
+---------------------------------------+------------------------+----------------------+------------------+---------------------------+
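
The selection performed by ``vpselq`` can be sketched per lane. This model is illustrative only (hypothetical helper name): for 8-bit elements, predicate bit ``i`` chooses between ``a`` and ``b`` for byte lane ``i``; wider elements are governed by groups of two (16-bit) or four (32-bit) predicate bits.

```c
#include <stdint.h>

/* Illustrative per-lane model of vpselq for 8-bit elements:
   bit `lane` of the 16-bit predicate selects a (set) or b (clear). */
uint8_t vpsel_lane_u8(uint8_t a, uint8_t b, uint16_t p, int lane)
{
    return ((p >> lane) & 1u) ? a : b;
}
```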
Create vector tail predicate
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+----------------------------------------------+------------------------+------------------+------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==============================================+========================+==================+==================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vctp8q(uint32_t a) | a -> Rn | VCTP.8 Rn | Rd -> result | |
| | | VMRS Rd, P0 | | |
+----------------------------------------------+------------------------+------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vctp16q(uint32_t a) | a -> Rn | VCTP.16 Rn | Rd -> result | |
| | | VMRS Rd, P0 | | |
+----------------------------------------------+------------------------+------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vctp32q(uint32_t a) | a -> Rn | VCTP.32 Rn | Rd -> result | |
| | | VMRS Rd, P0 | | |
+----------------------------------------------+------------------------+------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vctp64q(uint32_t a) | a -> Rn | VCTP.64 Rn | Rd -> result | |
| | | VMRS Rd, P0 | | |
+----------------------------------------------+------------------------+------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vctp8q_m( | a -> Rn | VMSR P0, Rp | Rd -> result | |
| uint32_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCTPT.8 Rn | | |
| | | VMRS Rd, P0 | | |
+----------------------------------------------+------------------------+------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vctp16q_m( | a -> Rn | VMSR P0, Rp | Rd -> result | |
| uint32_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCTPT.16 Rn | | |
| | | VMRS Rd, P0 | | |
+----------------------------------------------+------------------------+------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vctp32q_m( | a -> Rn | VMSR P0, Rp | Rd -> result | |
| uint32_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCTPT.32 Rn | | |
| | | VMRS Rd, P0 | | |
+----------------------------------------------+------------------------+------------------+------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| mve_pred16_t [__arm_]vctp64q_m( | a -> Rn | VMSR P0, Rp | Rd -> result | |
| uint32_t a, | p -> Rp | VPST | | |
| mve_pred16_t p) | | VCTPT.64 Rn | | |
| | | VMRS Rd, P0 | | |
+----------------------------------------------+------------------------+------------------+------------------+---------------------------+
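
The ``vctp*q`` intrinsics create a tail predicate in which the bottom
``min(a, lanes)`` elements are enabled. Since ``mve_pred16_t`` holds one
predicate bit per byte of the 128-bit vector, each enabled N-bit lane
contributes N/8 set bits, and the predicated ``vctp*q_m`` forms additionally
AND the result with the incoming predicate ``p``. The following portable
sketch (a hypothetical ``vctp_model`` helper, not part of ``arm_mve.h``)
illustrates the unpredicated semantics:

.. code:: c

   #include <stdint.h>

   /* Hypothetical reference model of vctp8q/vctp16q/vctp32q/vctp64q;
      lane_bytes is 1, 2, 4 or 8 respectively. Not the intrinsic itself. */
   static uint16_t vctp_model(uint32_t a, unsigned lane_bytes)
   {
       unsigned lanes  = 16 / lane_bytes;        /* lanes per 128-bit vector */
       unsigned active = a < lanes ? a : lanes;  /* clamp to vector length */
       unsigned bits   = active * lane_bytes;    /* one predicate bit per byte */
       return bits >= 16 ? 0xFFFFu : (uint16_t)((1u << bits) - 1u);
   }

For example, ``vctp8q(3)`` enables the three lowest byte lanes (mask
``0x0007``), while ``vctp32q(100)`` clamps to all four 32-bit lanes (mask
``0xFFFF``).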

64-bit arithmetic
=================

Logical shift left long
~~~~~~~~~~~~~~~~~~~~~~~

+----------------------------+-----------------------------+---------------------------+-----------------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+============================+=============================+===========================+=============================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]lsll( | value -> [RdaHi,RdaLo] | LSLL RdaLo, RdaHi, Rm | [RdaHi,RdaLo] -> result | |
| uint64_t value, | shift -> Rm | | | |
| int32_t shift) | | | | |
+----------------------------+-----------------------------+---------------------------+-----------------------------+---------------------------+
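
``lsll`` keeps its 64-bit operand in a general-purpose register pair and
takes a run-time shift amount in a register; because the amount is signed, a
negative value shifts right instead of left. A portable sketch of these
semantics (a hypothetical ``lsll_model`` helper, assuming amounts at or
beyond ±64 shift everything out):

.. code:: c

   #include <stdint.h>

   /* Hypothetical model of lsll semantics: signed shift amount, where a
      negative amount performs a logical right shift instead. */
   static uint64_t lsll_model(uint64_t value, int32_t shift)
   {
       if (shift >= 64 || shift <= -64)
           return 0;                           /* everything shifted out */
       return shift >= 0 ? value << shift
                         : value >> -shift;    /* negative => right shift */
   }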

Arithmetic shift right long
~~~~~~~~~~~~~~~~~~~~~~~~~~~

+---------------------------+-----------------------------+---------------------------+-----------------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+===========================+=============================+===========================+=============================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]asrl( | value -> [RdaHi,RdaLo] | ASRL RdaLo, RdaHi, Rm | [RdaHi,RdaLo] -> result | |
| int64_t value, | shift -> Rm | | | |
| int32_t shift) | | | | |
+---------------------------+-----------------------------+---------------------------+-----------------------------+---------------------------+
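
As with ``lsll``, the register shift amount of ``asrl`` is signed, so a
negative amount shifts left. A portable sketch (a hypothetical
``asrl_model`` helper; it assumes ``>>`` on a signed operand is an
arithmetic shift, as on all mainstream C compilers):

.. code:: c

   #include <stdint.h>

   /* Hypothetical model of asrl semantics. */
   static int64_t asrl_model(int64_t value, int32_t shift)
   {
       if (shift <= -64)
           return 0;                                    /* shifted out left */
       if (shift < 0)
           return (int64_t)((uint64_t)value << -shift); /* negative => left */
       if (shift >= 64)
           return value < 0 ? -1 : 0;                   /* only sign remains */
       return value >> shift;                           /* arithmetic shift */
   }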

Saturating rounding shift right long
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+====================================+=============================+===================================+=============================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]sqrshrl( | value -> [RdaHi,RdaLo] | SQRSHRL RdaLo, RdaHi, #64, Rm | [RdaHi,RdaLo] -> result | |
| int64_t value, | shift -> Rm | | | |
| int32_t shift) | | | | |
+------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]sqrshrl_sat48( | value -> [RdaHi,RdaLo] | SQRSHRL RdaLo, RdaHi, #48, Rm | [RdaHi,RdaLo] -> result | |
| int64_t value, | shift -> Rm | | | |
| int32_t shift) | | | | |
+------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]sqrshr( | value -> Rda | SQRSHR Rda, Rm | Rda -> result | |
| int32_t value, | shift -> Rm | | | |
| int32_t shift) | | | | |
+------------------------------------+-----------------------------+-----------------------------------+-----------------------------+---------------------------+
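
The saturating rounding shifts round to nearest by adding half of the final
bit weight before shifting, and saturate to the destination width (64 or 48
bits for ``sqrshrl``, selected by the ``#64``/``#48`` operand, 32 bits for
``sqrshr``). The sketch below models ``sqrshr`` only (a hypothetical
``sqrshr_model`` helper, assuming shift amounts in the range -32..32);
negative amounts act as saturating left shifts:

.. code:: c

   #include <stdint.h>

   /* Hypothetical model of sqrshr: rounding right shift for positive
      amounts, saturating left shift for negative amounts. */
   static int32_t sqrshr_model(int32_t value, int32_t shift)
   {
       int64_t result;
       if (shift > 0)                            /* round, then shift right */
           result = ((int64_t)value + ((int64_t)1 << (shift - 1))) >> shift;
       else                                      /* negative => left shift */
           result = (int64_t)value << -shift;
       if (result > INT32_MAX) return INT32_MAX; /* saturate high */
       if (result < INT32_MIN) return INT32_MIN; /* saturate low */
       return (int32_t)result;
   }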

Rounding shift right long
~~~~~~~~~~~~~~~~~~~~~~~~~

+------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| Intrinsic | Argument preparation | Instruction | Result | Supported architectures |
+==============================+=============================+=================================+=============================+===========================+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint64_t [__arm_]urshrl( | value -> [RdaHi,RdaLo] | URSHRL RdaLo, RdaHi, #shift | [RdaHi,RdaLo] -> result | |
| uint64_t value, | 1 <= shift <= 32 | | | |
| const int shift) | | | | |
+------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int64_t [__arm_]srshrl( | value -> [RdaHi,RdaLo] | SRSHRL RdaLo, RdaHi, #shift | [RdaHi,RdaLo] -> result | |
| int64_t value, | 1 <= shift <= 32 | | | |
| const int shift) | | | | |
+------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| uint32_t [__arm_]urshr( | value -> Rda | URSHR Rda, #shift | Rda -> result | |
| uint32_t value, | 1 <= shift <= 32 | | | |
| const int shift) | | | | |
+------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
| .. code:: c | :: | :: | :: | ``MVE`` |
| | | | | |
| int32_t [__arm_]srshr( | value -> Rda | SRSHR Rda, #shift | Rda -> result | |
| int32_t value, | 1 <= shift <= 32 | | | |
| const int shift) | | | | |
+------------------------------+-----------------------------+---------------------------------+-----------------------------+---------------------------+
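
The non-saturating rounding shifts take an immediate amount in the range
1..32 and round to nearest by adding half the shift step before shifting. A
portable sketch of the unsigned 32-bit form (a hypothetical ``urshr_model``
helper; ``srshr`` is analogous with a sign-extending shift):

.. code:: c

   #include <stdint.h>

   /* Hypothetical model of urshr: rounding right shift, immediate 1..32.
      The 64-bit intermediate keeps the rounding add from overflowing and
      makes a shift of exactly 32 well defined. */
   static uint32_t urshr_model(uint32_t value, unsigned shift)
   {
       uint64_t rounded = (uint64_t)value + (1ull << (shift - 1));
       return (uint32_t)(rounded >> shift);
   }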