Project import
diff --git a/aarch64-linux-android-4.9/COPYING b/aarch64-linux-android-4.9/COPYING
new file mode 100644
index 0000000..623b625
--- /dev/null
+++ b/aarch64-linux-android-4.9/COPYING
@@ -0,0 +1,340 @@ + GNU GENERAL PUBLIC LICENSE + Version 2, June 1991 + + Copyright (C) 1989, 1991 Free Software Foundation, Inc. + 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The licenses for most software are designed to take away your +freedom to share and change it. By contrast, the GNU General Public +License is intended to guarantee your freedom to share and change free +software--to make sure the software is free for all its users. This +General Public License applies to most of the Free Software +Foundation's software and to any other program whose authors commit to +using it. (Some other Free Software Foundation software is covered by +the GNU Library General Public License instead.) You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +this service if you wish), that you receive source code or can get it +if you want it, that you can change the software or use pieces of it +in new free programs; and that you know you can do these things. + + To protect your rights, we need to make restrictions that forbid +anyone to deny you these rights or to ask you to surrender the rights. +These restrictions translate to certain responsibilities for you if you +distribute copies of the software, or if you modify it. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must give the recipients all the rights that +you have. You must make sure that they, too, receive or can get the +source code. And you must show them these terms so they know their +rights. + + We protect your rights with two steps: (1) copyright the software, and +(2) offer you this license which gives you legal permission to copy, +distribute and/or modify the software. + + Also, for each author's protection and ours, we want to make certain +that everyone understands that there is no warranty for this free +software. If the software is modified by someone else and passed on, we +want its recipients to know that what they have is not the original, so +that any problems introduced by others will not reflect on the original +authors' reputations. + + Finally, any free program is threatened constantly by software +patents. We wish to avoid the danger that redistributors of a free +program will individually obtain patent licenses, in effect making the +program proprietary. To prevent this, we have made it clear that any +patent must be licensed for everyone's free use or not licensed at all. + + The precise terms and conditions for copying, distribution and +modification follow. + + GNU GENERAL PUBLIC LICENSE + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + + 0. This License applies to any program or other work which contains +a notice placed by the copyright holder saying it may be distributed +under the terms of this General Public License. The "Program", below, +refers to any such program or work, and a "work based on the Program" +means either the Program or any derivative work under copyright law: +that is to say, a work containing the Program or a portion of it, +either verbatim or with modifications and/or translated into another +language. 
(Hereinafter, translation is included without limitation in +the term "modification".) Each licensee is addressed as "you". + +Activities other than copying, distribution and modification are not +covered by this License; they are outside its scope. The act of +running the Program is not restricted, and the output from the Program +is covered only if its contents constitute a work based on the +Program (independent of having been made by running the Program). +Whether that is true depends on what the Program does. + + 1. You may copy and distribute verbatim copies of the Program's +source code as you receive it, in any medium, provided that you +conspicuously and appropriately publish on each copy an appropriate +copyright notice and disclaimer of warranty; keep intact all the +notices that refer to this License and to the absence of any warranty; +and give any other recipients of the Program a copy of this License +along with the Program. + +You may charge a fee for the physical act of transferring a copy, and +you may at your option offer warranty protection in exchange for a fee. + + 2. You may modify your copy or copies of the Program or any portion +of it, thus forming a work based on the Program, and copy and +distribute such modifications or work under the terms of Section 1 +above, provided that you also meet all of these conditions: + + a) You must cause the modified files to carry prominent notices + stating that you changed the files and the date of any change. + + b) You must cause any work that you distribute or publish, that in + whole or in part contains or is derived from the Program or any + part thereof, to be licensed as a whole at no charge to all third + parties under the terms of this License. + + c) If the modified program normally reads commands interactively + when run, you must cause it, when started running for such + interactive use in the most ordinary way, to print or display an + announcement including an appropriate copyright notice and a + notice that there is no warranty (or else, saying that you provide + a warranty) and that users may redistribute the program under + these conditions, and telling the user how to view a copy of this + License. (Exception: if the Program itself is interactive but + does not normally print such an announcement, your work based on + the Program is not required to print an announcement.) + +These requirements apply to the modified work as a whole. If +identifiable sections of that work are not derived from the Program, +and can be reasonably considered independent and separate works in +themselves, then this License, and its terms, do not apply to those +sections when you distribute them as separate works. But when you +distribute the same sections as part of a whole which is a work based +on the Program, the distribution of the whole must be on the terms of +this License, whose permissions for other licensees extend to the +entire whole, and thus to each and every part regardless of who wrote it. + +Thus, it is not the intent of this section to claim rights or contest +your rights to work written entirely by you; rather, the intent is to +exercise the right to control the distribution of derivative or +collective works based on the Program. + +In addition, mere aggregation of another work not based on the Program +with the Program (or with a work based on the Program) on a volume of +a storage or distribution medium does not bring the other work under +the scope of this License. + + 3. 
You may copy and distribute the Program (or a work based on it, +under Section 2) in object code or executable form under the terms of +Sections 1 and 2 above provided that you also do one of the following: + + a) Accompany it with the complete corresponding machine-readable + source code, which must be distributed under the terms of Sections + 1 and 2 above on a medium customarily used for software interchange; or, + + b) Accompany it with a written offer, valid for at least three + years, to give any third party, for a charge no more than your + cost of physically performing source distribution, a complete + machine-readable copy of the corresponding source code, to be + distributed under the terms of Sections 1 and 2 above on a medium + customarily used for software interchange; or, + + c) Accompany it with the information you received as to the offer + to distribute corresponding source code. (This alternative is + allowed only for noncommercial distribution and only if you + received the program in object code or executable form with such + an offer, in accord with Subsection b above.) + +The source code for a work means the preferred form of the work for +making modifications to it. For an executable work, complete source +code means all the source code for all modules it contains, plus any +associated interface definition files, plus the scripts used to +control compilation and installation of the executable. However, as a +special exception, the source code distributed need not include +anything that is normally distributed (in either source or binary +form) with the major components (compiler, kernel, and so on) of the +operating system on which the executable runs, unless that component +itself accompanies the executable. + +If distribution of executable or object code is made by offering +access to copy from a designated place, then offering equivalent +access to copy the source code from the same place counts as +distribution of the source code, even though third parties are not +compelled to copy the source along with the object code. + + 4. You may not copy, modify, sublicense, or distribute the Program +except as expressly provided under this License. Any attempt +otherwise to copy, modify, sublicense or distribute the Program is +void, and will automatically terminate your rights under this License. +However, parties who have received copies, or rights, from you under +this License will not have their licenses terminated so long as such +parties remain in full compliance. + + 5. You are not required to accept this License, since you have not +signed it. However, nothing else grants you permission to modify or +distribute the Program or its derivative works. These actions are +prohibited by law if you do not accept this License. Therefore, by +modifying or distributing the Program (or any work based on the +Program), you indicate your acceptance of this License to do so, and +all its terms and conditions for copying, distributing or modifying +the Program or works based on it. + + 6. Each time you redistribute the Program (or any work based on the +Program), the recipient automatically receives a license from the +original licensor to copy, distribute or modify the Program subject to +these terms and conditions. You may not impose any further +restrictions on the recipients' exercise of the rights granted herein. +You are not responsible for enforcing compliance by third parties to +this License. + + 7. 
If, as a consequence of a court judgment or allegation of patent +infringement or for any other reason (not limited to patent issues), +conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot +distribute so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you +may not distribute the Program at all. For example, if a patent +license would not permit royalty-free redistribution of the Program by +all those who receive copies directly or indirectly through you, then +the only way you could satisfy both it and this License would be to +refrain entirely from distribution of the Program. + +If any portion of this section is held invalid or unenforceable under +any particular circumstance, the balance of the section is intended to +apply and the section as a whole is intended to apply in other +circumstances. + +It is not the purpose of this section to induce you to infringe any +patents or other property right claims or to contest validity of any +such claims; this section has the sole purpose of protecting the +integrity of the free software distribution system, which is +implemented by public license practices. Many people have made +generous contributions to the wide range of software distributed +through that system in reliance on consistent application of that +system; it is up to the author/donor to decide if he or she is willing +to distribute software through any other system and a licensee cannot +impose that choice. + +This section is intended to make thoroughly clear what is believed to +be a consequence of the rest of this License. + + 8. If the distribution and/or use of the Program is restricted in +certain countries either by patents or by copyrighted interfaces, the +original copyright holder who places the Program under this License +may add an explicit geographical distribution limitation excluding +those countries, so that distribution is permitted only in or among +countries not thus excluded. In such case, this License incorporates +the limitation as if written in the body of this License. + + 9. The Free Software Foundation may publish revised and/or new versions +of the General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + +Each version is given a distinguishing version number. If the Program +specifies a version number of this License which applies to it and "any +later version", you have the option of following the terms and conditions +either of that version or of any later version published by the Free +Software Foundation. If the Program does not specify a version number of +this License, you may choose any version ever published by the Free Software +Foundation. + + 10. If you wish to incorporate parts of the Program into other free +programs whose distribution conditions are different, write to the author +to ask for permission. For software which is copyrighted by the Free +Software Foundation, write to the Free Software Foundation; we sometimes +make exceptions for this. Our decision will be guided by the two goals +of preserving the free status of all derivatives of our free software and +of promoting the sharing and reuse of software generally. + + NO WARRANTY + + 11. 
BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY +FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN +OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES +PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED +OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF +MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS +TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE +PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, +REPAIR OR CORRECTION. + + 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR +REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, +INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING +OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED +TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY +YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER +PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE +POSSIBILITY OF SUCH DAMAGES. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +convey the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. + + <one line to give the program's name and a brief idea of what it does.> + Copyright (C) <year> <name of author> + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + + +Also add information on how to contact you by electronic and paper mail. + +If the program is interactive, make it output a short notice like this +when it starts in an interactive mode: + + Gnomovision version 69, Copyright (C) year name of author + Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate +parts of the General Public License. Of course, the commands you use may +be called something other than `show w' and `show c'; they could even be +mouse-clicks or menu items--whatever suits your program. + +You should also get your employer (if you work as a programmer) or your +school, if any, to sign a "copyright disclaimer" for the program, if +necessary. 
Here is a sample; alter the names: + + Yoyodyne, Inc., hereby disclaims all copyright interest in the program + `Gnomovision' (which makes passes at compilers) written by James Hacker. + + <signature of Ty Coon>, 1 April 1989 + Ty Coon, President of Vice + +This General Public License does not permit incorporating your program into +proprietary programs. If your program is a subroutine library, you may +consider it more useful to permit linking proprietary applications with the +library. If this is what you want to do, use the GNU Library General +Public License instead of this License.
diff --git a/aarch64-linux-android-4.9/COPYING.LIB b/aarch64-linux-android-4.9/COPYING.LIB
new file mode 100644
index 0000000..2d2d780
--- /dev/null
+++ b/aarch64-linux-android-4.9/COPYING.LIB
@@ -0,0 +1,510 @@ + + GNU LESSER GENERAL PUBLIC LICENSE + Version 2.1, February 1999 + + Copyright (C) 1991, 1999 Free Software Foundation, Inc. + 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + +[This is the first released version of the Lesser GPL. It also counts + as the successor of the GNU Library Public License, version 2, hence + the version number 2.1.] + + Preamble + + The licenses for most software are designed to take away your +freedom to share and change it. By contrast, the GNU General Public +Licenses are intended to guarantee your freedom to share and change +free software--to make sure the software is free for all its users. + + This license, the Lesser General Public License, applies to some +specially designated software packages--typically libraries--of the +Free Software Foundation and other authors who decide to use it. You +can use it too, but we suggest you first think carefully about whether +this license or the ordinary General Public License is the better +strategy to use in any particular case, based on the explanations +below. + + When we speak of free software, we are referring to freedom of use, +not price. Our General Public Licenses are designed to make sure that +you have the freedom to distribute copies of free software (and charge +for this service if you wish); that you receive source code or can get +it if you want it; that you can change the software and use pieces of +it in new free programs; and that you are informed that you can do +these things. + + To protect your rights, we need to make restrictions that forbid +distributors to deny you these rights or to ask you to surrender these +rights. These restrictions translate to certain responsibilities for +you if you distribute copies of the library or if you modify it. + + For example, if you distribute copies of the library, whether gratis +or for a fee, you must give the recipients all the rights that we gave +you. You must make sure that they, too, receive or can get the source +code. If you link other code with the library, you must provide +complete object files to the recipients, so that they can relink them +with the library after making changes to the library and recompiling +it. And you must show them these terms so they know their rights. + + We protect your rights with a two-step method: (1) we copyright the +library, and (2) we offer you this license, which gives you legal +permission to copy, distribute and/or modify the library. + + To protect each distributor, we want to make it very clear that +there is no warranty for the free library. Also, if the library is +modified by someone else and passed on, the recipients should know +that what they have is not the original version, so that the original +author's reputation will not be affected by problems that might be +introduced by others. + + Finally, software patents pose a constant threat to the existence of +any free program. We wish to make sure that a company cannot +effectively restrict the users of a free program by obtaining a +restrictive license from a patent holder. Therefore, we insist that +any patent license obtained for a version of the library must be +consistent with the full freedom of use specified in this license. + + Most GNU software, including some libraries, is covered by the +ordinary GNU General Public License. 
This license, the GNU Lesser +General Public License, applies to certain designated libraries, and +is quite different from the ordinary General Public License. We use +this license for certain libraries in order to permit linking those +libraries into non-free programs. + + When a program is linked with a library, whether statically or using +a shared library, the combination of the two is legally speaking a +combined work, a derivative of the original library. The ordinary +General Public License therefore permits such linking only if the +entire combination fits its criteria of freedom. The Lesser General +Public License permits more lax criteria for linking other code with +the library. + + We call this license the "Lesser" General Public License because it +does Less to protect the user's freedom than the ordinary General +Public License. It also provides other free software developers Less +of an advantage over competing non-free programs. These disadvantages +are the reason we use the ordinary General Public License for many +libraries. However, the Lesser license provides advantages in certain +special circumstances. + + For example, on rare occasions, there may be a special need to +encourage the widest possible use of a certain library, so that it +becomes a de-facto standard. To achieve this, non-free programs must +be allowed to use the library. A more frequent case is that a free +library does the same job as widely used non-free libraries. In this +case, there is little to gain by limiting the free library to free +software only, so we use the Lesser General Public License. + + In other cases, permission to use a particular library in non-free +programs enables a greater number of people to use a large body of +free software. For example, permission to use the GNU C Library in +non-free programs enables many more people to use the whole GNU +operating system, as well as its variant, the GNU/Linux operating +system. + + Although the Lesser General Public License is Less protective of the +users' freedom, it does ensure that the user of a program that is +linked with the Library has the freedom and the wherewithal to run +that program using a modified version of the Library. + + The precise terms and conditions for copying, distribution and +modification follow. Pay close attention to the difference between a +"work based on the library" and a "work that uses the library". The +former contains code derived from the library, whereas the latter must +be combined with the library in order to run. + + GNU LESSER GENERAL PUBLIC LICENSE + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + + 0. This License Agreement applies to any software library or other +program which contains a notice placed by the copyright holder or +other authorized party saying it may be distributed under the terms of +this Lesser General Public License (also called "this License"). +Each licensee is addressed as "you". + + A "library" means a collection of software functions and/or data +prepared so as to be conveniently linked with application programs +(which use some of those functions and data) to form executables. + + The "Library", below, refers to any such software library or work +which has been distributed under these terms. 
A "work based on the +Library" means either the Library or any derivative work under +copyright law: that is to say, a work containing the Library or a +portion of it, either verbatim or with modifications and/or translated +straightforwardly into another language. (Hereinafter, translation is +included without limitation in the term "modification".) + + "Source code" for a work means the preferred form of the work for +making modifications to it. For a library, complete source code means +all the source code for all modules it contains, plus any associated +interface definition files, plus the scripts used to control +compilation and installation of the library. + + Activities other than copying, distribution and modification are not +covered by this License; they are outside its scope. The act of +running a program using the Library is not restricted, and output from +such a program is covered only if its contents constitute a work based +on the Library (independent of the use of the Library in a tool for +writing it). Whether that is true depends on what the Library does +and what the program that uses the Library does. + + 1. You may copy and distribute verbatim copies of the Library's +complete source code as you receive it, in any medium, provided that +you conspicuously and appropriately publish on each copy an +appropriate copyright notice and disclaimer of warranty; keep intact +all the notices that refer to this License and to the absence of any +warranty; and distribute a copy of this License along with the +Library. + + You may charge a fee for the physical act of transferring a copy, +and you may at your option offer warranty protection in exchange for a +fee. + + 2. You may modify your copy or copies of the Library or any portion +of it, thus forming a work based on the Library, and copy and +distribute such modifications or work under the terms of Section 1 +above, provided that you also meet all of these conditions: + + a) The modified work must itself be a software library. + + b) You must cause the files modified to carry prominent notices + stating that you changed the files and the date of any change. + + c) You must cause the whole of the work to be licensed at no + charge to all third parties under the terms of this License. + + d) If a facility in the modified Library refers to a function or a + table of data to be supplied by an application program that uses + the facility, other than as an argument passed when the facility + is invoked, then you must make a good faith effort to ensure that, + in the event an application does not supply such function or + table, the facility still operates, and performs whatever part of + its purpose remains meaningful. + + (For example, a function in a library to compute square roots has + a purpose that is entirely well-defined independent of the + application. Therefore, Subsection 2d requires that any + application-supplied function or table used by this function must + be optional: if the application does not supply it, the square + root function must still compute square roots.) + +These requirements apply to the modified work as a whole. If +identifiable sections of that work are not derived from the Library, +and can be reasonably considered independent and separate works in +themselves, then this License, and its terms, do not apply to those +sections when you distribute them as separate works. 
But when you +distribute the same sections as part of a whole which is a work based +on the Library, the distribution of the whole must be on the terms of +this License, whose permissions for other licensees extend to the +entire whole, and thus to each and every part regardless of who wrote +it. + +Thus, it is not the intent of this section to claim rights or contest +your rights to work written entirely by you; rather, the intent is to +exercise the right to control the distribution of derivative or +collective works based on the Library. + +In addition, mere aggregation of another work not based on the Library +with the Library (or with a work based on the Library) on a volume of +a storage or distribution medium does not bring the other work under +the scope of this License. + + 3. You may opt to apply the terms of the ordinary GNU General Public +License instead of this License to a given copy of the Library. To do +this, you must alter all the notices that refer to this License, so +that they refer to the ordinary GNU General Public License, version 2, +instead of to this License. (If a newer version than version 2 of the +ordinary GNU General Public License has appeared, then you can specify +that version instead if you wish.) Do not make any other change in +these notices. + + Once this change is made in a given copy, it is irreversible for +that copy, so the ordinary GNU General Public License applies to all +subsequent copies and derivative works made from that copy. + + This option is useful when you wish to copy part of the code of +the Library into a program that is not a library. + + 4. You may copy and distribute the Library (or a portion or +derivative of it, under Section 2) in object code or executable form +under the terms of Sections 1 and 2 above provided that you accompany +it with the complete corresponding machine-readable source code, which +must be distributed under the terms of Sections 1 and 2 above on a +medium customarily used for software interchange. + + If distribution of object code is made by offering access to copy +from a designated place, then offering equivalent access to copy the +source code from the same place satisfies the requirement to +distribute the source code, even though third parties are not +compelled to copy the source along with the object code. + + 5. A program that contains no derivative of any portion of the +Library, but is designed to work with the Library by being compiled or +linked with it, is called a "work that uses the Library". Such a +work, in isolation, is not a derivative work of the Library, and +therefore falls outside the scope of this License. + + However, linking a "work that uses the Library" with the Library +creates an executable that is a derivative of the Library (because it +contains portions of the Library), rather than a "work that uses the +library". The executable is therefore covered by this License. +Section 6 states terms for distribution of such executables. + + When a "work that uses the Library" uses material from a header file +that is part of the Library, the object code for the work may be a +derivative work of the Library even though the source code is not. +Whether this is true is especially significant if the work can be +linked without the Library, or if the work is itself a library. The +threshold for this to be true is not precisely defined by law. 
+ + If such an object file uses only numerical parameters, data +structure layouts and accessors, and small macros and small inline +functions (ten lines or less in length), then the use of the object +file is unrestricted, regardless of whether it is legally a derivative +work. (Executables containing this object code plus portions of the +Library will still fall under Section 6.) + + Otherwise, if the work is a derivative of the Library, you may +distribute the object code for the work under the terms of Section 6. +Any executables containing that work also fall under Section 6, +whether or not they are linked directly with the Library itself. + + 6. As an exception to the Sections above, you may also combine or +link a "work that uses the Library" with the Library to produce a +work containing portions of the Library, and distribute that work +under terms of your choice, provided that the terms permit +modification of the work for the customer's own use and reverse +engineering for debugging such modifications. + + You must give prominent notice with each copy of the work that the +Library is used in it and that the Library and its use are covered by +this License. You must supply a copy of this License. If the work +during execution displays copyright notices, you must include the +copyright notice for the Library among them, as well as a reference +directing the user to the copy of this License. Also, you must do one +of these things: + + a) Accompany the work with the complete corresponding + machine-readable source code for the Library including whatever + changes were used in the work (which must be distributed under + Sections 1 and 2 above); and, if the work is an executable linked + with the Library, with the complete machine-readable "work that + uses the Library", as object code and/or source code, so that the + user can modify the Library and then relink to produce a modified + executable containing the modified Library. (It is understood + that the user who changes the contents of definitions files in the + Library will not necessarily be able to recompile the application + to use the modified definitions.) + + b) Use a suitable shared library mechanism for linking with the + Library. A suitable mechanism is one that (1) uses at run time a + copy of the library already present on the user's computer system, + rather than copying library functions into the executable, and (2) + will operate properly with a modified version of the library, if + the user installs one, as long as the modified version is + interface-compatible with the version that the work was made with. + + c) Accompany the work with a written offer, valid for at least + three years, to give the same user the materials specified in + Subsection 6a, above, for a charge no more than the cost of + performing this distribution. + + d) If distribution of the work is made by offering access to copy + from a designated place, offer equivalent access to copy the above + specified materials from the same place. + + e) Verify that the user has already received a copy of these + materials or that you have already sent this user a copy. + + For an executable, the required form of the "work that uses the +Library" must include any data and utility programs needed for +reproducing the executable from it. 
However, as a special exception, +the materials to be distributed need not include anything that is +normally distributed (in either source or binary form) with the major +components (compiler, kernel, and so on) of the operating system on +which the executable runs, unless that component itself accompanies +the executable. + + It may happen that this requirement contradicts the license +restrictions of other proprietary libraries that do not normally +accompany the operating system. Such a contradiction means you cannot +use both them and the Library together in an executable that you +distribute. + + 7. You may place library facilities that are a work based on the +Library side-by-side in a single library together with other library +facilities not covered by this License, and distribute such a combined +library, provided that the separate distribution of the work based on +the Library and of the other library facilities is otherwise +permitted, and provided that you do these two things: + + a) Accompany the combined library with a copy of the same work + based on the Library, uncombined with any other library + facilities. This must be distributed under the terms of the + Sections above. + + b) Give prominent notice with the combined library of the fact + that part of it is a work based on the Library, and explaining + where to find the accompanying uncombined form of the same work. + + 8. You may not copy, modify, sublicense, link with, or distribute +the Library except as expressly provided under this License. Any +attempt otherwise to copy, modify, sublicense, link with, or +distribute the Library is void, and will automatically terminate your +rights under this License. However, parties who have received copies, +or rights, from you under this License will not have their licenses +terminated so long as such parties remain in full compliance. + + 9. You are not required to accept this License, since you have not +signed it. However, nothing else grants you permission to modify or +distribute the Library or its derivative works. These actions are +prohibited by law if you do not accept this License. Therefore, by +modifying or distributing the Library (or any work based on the +Library), you indicate your acceptance of this License to do so, and +all its terms and conditions for copying, distributing or modifying +the Library or works based on it. + + 10. Each time you redistribute the Library (or any work based on the +Library), the recipient automatically receives a license from the +original licensor to copy, distribute, link with or modify the Library +subject to these terms and conditions. You may not impose any further +restrictions on the recipients' exercise of the rights granted herein. +You are not responsible for enforcing compliance by third parties with +this License. + + 11. If, as a consequence of a court judgment or allegation of patent +infringement or for any other reason (not limited to patent issues), +conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot +distribute so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you +may not distribute the Library at all. 
For example, if a patent +license would not permit royalty-free redistribution of the Library by +all those who receive copies directly or indirectly through you, then +the only way you could satisfy both it and this License would be to +refrain entirely from distribution of the Library. + +If any portion of this section is held invalid or unenforceable under +any particular circumstance, the balance of the section is intended to +apply, and the section as a whole is intended to apply in other +circumstances. + +It is not the purpose of this section to induce you to infringe any +patents or other property right claims or to contest validity of any +such claims; this section has the sole purpose of protecting the +integrity of the free software distribution system which is +implemented by public license practices. Many people have made +generous contributions to the wide range of software distributed +through that system in reliance on consistent application of that +system; it is up to the author/donor to decide if he or she is willing +to distribute software through any other system and a licensee cannot +impose that choice. + +This section is intended to make thoroughly clear what is believed to +be a consequence of the rest of this License. + + 12. If the distribution and/or use of the Library is restricted in +certain countries either by patents or by copyrighted interfaces, the +original copyright holder who places the Library under this License +may add an explicit geographical distribution limitation excluding those +countries, so that distribution is permitted only in or among +countries not thus excluded. In such case, this License incorporates +the limitation as if written in the body of this License. + + 13. The Free Software Foundation may publish revised and/or new +versions of the Lesser General Public License from time to time. +Such new versions will be similar in spirit to the present version, +but may differ in detail to address new problems or concerns. + +Each version is given a distinguishing version number. If the Library +specifies a version number of this License which applies to it and +"any later version", you have the option of following the terms and +conditions either of that version or of any later version published by +the Free Software Foundation. If the Library does not specify a +license version number, you may choose any version ever published by +the Free Software Foundation. + + 14. If you wish to incorporate parts of the Library into other free +programs whose distribution conditions are incompatible with these, +write to the author to ask for permission. For software which is +copyrighted by the Free Software Foundation, write to the Free +Software Foundation; we sometimes make exceptions for this. Our +decision will be guided by the two goals of preserving the free status +of all derivatives of our free software and of promoting the sharing +and reuse of software generally. + + NO WARRANTY + + 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO +WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. +EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR +OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY +KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE +LIBRARY IS WITH YOU. 
SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME +THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN +WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY +AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU +FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR +CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE +LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING +RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A +FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF +SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH +DAMAGES. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Libraries + + If you develop a new library, and you want it to be of the greatest +possible use to the public, we recommend making it free software that +everyone can redistribute and change. You can do so by permitting +redistribution under these terms (or, alternatively, under the terms +of the ordinary General Public License). + + To apply these terms, attach the following notices to the library. +It is safest to attach them to the start of each source file to most +effectively convey the exclusion of warranty; and each file should +have at least the "copyright" line and a pointer to where the full +notice is found. + + + <one line to give the library's name and a brief idea of what it does.> + Copyright (C) <year> <name of author> + + This library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + This library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with this library; if not, write to the Free Software + Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + +Also add information on how to contact you by electronic and paper mail. + +You should also get your employer (if you work as a programmer) or +your school, if any, to sign a "copyright disclaimer" for the library, +if necessary. Here is a sample; alter the names: + + Yoyodyne, Inc., hereby disclaims all copyright interest in the + library `Frob' (a library for tweaking knobs) written by James + Random Hacker. + + <signature of Ty Coon>, 1 April 1990 + Ty Coon, President of Vice + +That's all there is to it! + +
diff --git a/aarch64-linux-android-4.9/COPYING.RUNTIME b/aarch64-linux-android-4.9/COPYING.RUNTIME
new file mode 100644
index 0000000..e1b3c69
--- /dev/null
+++ b/aarch64-linux-android-4.9/COPYING.RUNTIME
@@ -0,0 +1,73 @@ +GCC RUNTIME LIBRARY EXCEPTION + +Version 3.1, 31 March 2009 + +Copyright (C) 2009 Free Software Foundation, Inc. <http://fsf.org/> + +Everyone is permitted to copy and distribute verbatim copies of this +license document, but changing it is not allowed. + +This GCC Runtime Library Exception ("Exception") is an additional +permission under section 7 of the GNU General Public License, version +3 ("GPLv3"). It applies to a given file (the "Runtime Library") that +bears a notice placed by the copyright holder of the file stating that +the file is governed by GPLv3 along with this Exception. + +When you use GCC to compile a program, GCC may combine portions of +certain GCC header files and runtime libraries with the compiled +program. The purpose of this Exception is to allow compilation of +non-GPL (including proprietary) programs to use, in this way, the +header files and runtime libraries covered by this Exception. + +0. Definitions. + +A file is an "Independent Module" if it either requires the Runtime +Library for execution after a Compilation Process, or makes use of an +interface provided by the Runtime Library, but is not otherwise based +on the Runtime Library. + +"GCC" means a version of the GNU Compiler Collection, with or without +modifications, governed by version 3 (or a specified later version) of +the GNU General Public License (GPL) with the option of using any +subsequent versions published by the FSF. + +"GPL-compatible Software" is software whose conditions of propagation, +modification and use would permit combination with GCC in accord with +the license of GCC. + +"Target Code" refers to output from any compiler for a real or virtual +target processor architecture, in executable form or suitable for +input to an assembler, loader, linker and/or execution +phase. Notwithstanding that, Target Code does not include data in any +format that is used as a compiler intermediate representation, or used +for producing a compiler intermediate representation. + +The "Compilation Process" transforms code entirely represented in +non-intermediate languages designed for human-written code, and/or in +Java Virtual Machine byte code, into Target Code. Thus, for example, +use of source code generators and preprocessors need not be considered +part of the Compilation Process, since the Compilation Process can be +understood as starting with the output of the generators or +preprocessors. + +A Compilation Process is "Eligible" if it is done using GCC, alone or +with other GPL-compatible software, or if it is done without using any +work based on GCC. For example, using non-GPL-compatible Software to +optimize any GCC intermediate representations would not qualify as an +Eligible Compilation Process. + +1. Grant of Additional Permission. + +You have permission to propagate a work of Target Code formed by +combining the Runtime Library with Independent Modules, even if such +propagation would otherwise violate the terms of GPLv3, provided that +all Target Code was generated by Eligible Compilation Processes. You +may then convey such a combination under terms of your choice, +consistent with the licensing of the Independent Modules. + +2. No Weakening of GCC Copyleft. + +The availability of this Exception does not imply any general +presumption that third-party software is unaffected by the copyleft +requirements of the license of GCC. +
diff --git a/aarch64-linux-android-4.9/COPYING3 b/aarch64-linux-android-4.9/COPYING3
new file mode 100644
index 0000000..94a9ed0
--- /dev/null
+++ b/aarch64-linux-android-4.9/COPYING3
@@ -0,0 +1,674 @@ + GNU GENERAL PUBLIC LICENSE + Version 3, 29 June 2007 + + Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/> + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The GNU General Public License is a free, copyleft license for +software and other kinds of works. + + The licenses for most software and other practical works are designed +to take away your freedom to share and change the works. By contrast, +the GNU General Public License is intended to guarantee your freedom to +share and change all versions of a program--to make sure it remains free +software for all its users. We, the Free Software Foundation, use the +GNU General Public License for most of our software; it applies also to +any other work released this way by its authors. You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +them if you wish), that you receive source code or can get it if you +want it, that you can change the software or use pieces of it in new +free programs, and that you know you can do these things. + + To protect your rights, we need to prevent others from denying you +these rights or asking you to surrender the rights. Therefore, you have +certain responsibilities if you distribute copies of the software, or if +you modify it: responsibilities to respect the freedom of others. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must pass on to the recipients the same +freedoms that you received. You must make sure that they, too, receive +or can get the source code. And you must show them these terms so they +know their rights. + + Developers that use the GNU GPL protect your rights with two steps: +(1) assert copyright on the software, and (2) offer you this License +giving you legal permission to copy, distribute and/or modify it. + + For the developers' and authors' protection, the GPL clearly explains +that there is no warranty for this free software. For both users' and +authors' sake, the GPL requires that modified versions be marked as +changed, so that their problems will not be attributed erroneously to +authors of previous versions. + + Some devices are designed to deny users access to install or run +modified versions of the software inside them, although the manufacturer +can do so. This is fundamentally incompatible with the aim of +protecting users' freedom to change the software. The systematic +pattern of such abuse occurs in the area of products for individuals to +use, which is precisely where it is most unacceptable. Therefore, we +have designed this version of the GPL to prohibit the practice for those +products. If such problems arise substantially in other domains, we +stand ready to extend this provision to those domains in future versions +of the GPL, as needed to protect the freedom of users. + + Finally, every program is threatened constantly by software patents. +States should not allow patents to restrict development and use of +software on general-purpose computers, but in those that do, we wish to +avoid the special danger that patents applied to a free program could +make it effectively proprietary. To prevent this, the GPL assures that +patents cannot be used to render the program non-free. 
+ + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. + + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. + + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. + + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. 
However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. + + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. 
+ + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. + + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. + + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. + + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. 
+ + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. + + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. + + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. + + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. 
Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. + + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. 
+ + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. + + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. 
For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. + + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. 
You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Use with the GNU Affero General Public License. + + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU Affero General Public License into a single +combined work, and to convey the resulting work. The terms of this +License will continue to apply to the part which is the covered work, +but the special requirements of the GNU Affero General Public License, +section 13, concerning interaction through a network will apply to the +combination as such. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. 
+ + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. + + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF +SUCH DAMAGES. + + 17. Interpretation of Sections 15 and 16. + + If the disclaimer of warranty and limitation of liability provided +above cannot be given local legal effect according to their terms, +reviewing courts shall apply local law that most closely approximates +an absolute waiver of all civil liability in connection with the +Program, unless a warranty or assumption of liability accompanies a +copy of the Program in return for a fee. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +state the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. + + <one line to give the program's name and a brief idea of what it does.> + Copyright (C) <year> <name of author> + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see <http://www.gnu.org/licenses/>. + +Also add information on how to contact you by electronic and paper mail. + + If the program does terminal interaction, make it output a short +notice like this when it starts in an interactive mode: + + <program> Copyright (C) <year> <name of author> + This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate +parts of the General Public License. 
Of course, your program's commands +might be different; for a GUI interface, you would use an "about box". + + You should also get your employer (if you work as a programmer) or school, +if any, to sign a "copyright disclaimer" for the program, if necessary. +For more information on this, and how to apply and follow the GNU GPL, see +<http://www.gnu.org/licenses/>. + + The GNU General Public License does not permit incorporating your program +into proprietary programs. If your program is a subroutine library, you +may consider it more useful to permit linking proprietary applications with +the library. If this is what you want to do, use the GNU Lesser General +Public License instead of this License. But first, please read +<http://www.gnu.org/philosophy/why-not-lgpl.html>.
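The "How to Apply These Terms" appendix above asks interactive programs to print a short startup banner offering `show w' and `show c' commands. As a minimal sketch of that requirement (not part of the imported files; the program name, year, and author below are placeholders), such a banner loop could look like:

    # Hypothetical sketch of the interactive startup banner the appendix above
    # describes; the program name, year, and author are placeholders.

    def banner() -> None:
        print("frobnicate  Copyright (C) 2014  J. Random Hacker")
        print("This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.")
        print("This is free software, and you are welcome to redistribute it")
        print("under certain conditions; type `show c' for details.")

    def main() -> None:
        banner()
        while True:
            try:
                cmd = input("> ").strip()
            except EOFError:
                break
            if cmd == "show w":
                print("(print the warranty disclaimer, sections 15-17 of the license)")
            elif cmd == "show c":
                print("(print the conveying conditions of the license)")
            elif cmd in ("quit", "exit"):
                break

    if __name__ == "__main__":
        main()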
diff --git a/aarch64-linux-android-4.9/COPYING3.LIB b/aarch64-linux-android-4.9/COPYING3.LIB new file mode 100644 index 0000000..fc8a5de --- /dev/null +++ b/aarch64-linux-android-4.9/COPYING3.LIB
@@ -0,0 +1,165 @@ + GNU LESSER GENERAL PUBLIC LICENSE + Version 3, 29 June 2007 + + Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/> + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + + This version of the GNU Lesser General Public License incorporates +the terms and conditions of version 3 of the GNU General Public +License, supplemented by the additional permissions listed below. + + 0. Additional Definitions. + + As used herein, "this License" refers to version 3 of the GNU Lesser +General Public License, and the "GNU GPL" refers to version 3 of the GNU +General Public License. + + "The Library" refers to a covered work governed by this License, +other than an Application or a Combined Work as defined below. + + An "Application" is any work that makes use of an interface provided +by the Library, but which is not otherwise based on the Library. +Defining a subclass of a class defined by the Library is deemed a mode +of using an interface provided by the Library. + + A "Combined Work" is a work produced by combining or linking an +Application with the Library. The particular version of the Library +with which the Combined Work was made is also called the "Linked +Version". + + The "Minimal Corresponding Source" for a Combined Work means the +Corresponding Source for the Combined Work, excluding any source code +for portions of the Combined Work that, considered in isolation, are +based on the Application, and not on the Linked Version. + + The "Corresponding Application Code" for a Combined Work means the +object code and/or source code for the Application, including any data +and utility programs needed for reproducing the Combined Work from the +Application, but excluding the System Libraries of the Combined Work. + + 1. Exception to Section 3 of the GNU GPL. + + You may convey a covered work under sections 3 and 4 of this License +without being bound by section 3 of the GNU GPL. + + 2. Conveying Modified Versions. + + If you modify a copy of the Library, and, in your modifications, a +facility refers to a function or data to be supplied by an Application +that uses the facility (other than as an argument passed when the +facility is invoked), then you may convey a copy of the modified +version: + + a) under this License, provided that you make a good faith effort to + ensure that, in the event an Application does not supply the + function or data, the facility still operates, and performs + whatever part of its purpose remains meaningful, or + + b) under the GNU GPL, with none of the additional permissions of + this License applicable to that copy. + + 3. Object Code Incorporating Material from Library Header Files. + + The object code form of an Application may incorporate material from +a header file that is part of the Library. You may convey such object +code under terms of your choice, provided that, if the incorporated +material is not limited to numerical parameters, data structure +layouts and accessors, or small macros, inline functions and templates +(ten or fewer lines in length), you do both of the following: + + a) Give prominent notice with each copy of the object code that the + Library is used in it and that the Library and its use are + covered by this License. + + b) Accompany the object code with a copy of the GNU GPL and this license + document. + + 4. Combined Works. 
+ + You may convey a Combined Work under terms of your choice that, +taken together, effectively do not restrict modification of the +portions of the Library contained in the Combined Work and reverse +engineering for debugging such modifications, if you also do each of +the following: + + a) Give prominent notice with each copy of the Combined Work that + the Library is used in it and that the Library and its use are + covered by this License. + + b) Accompany the Combined Work with a copy of the GNU GPL and this license + document. + + c) For a Combined Work that displays copyright notices during + execution, include the copyright notice for the Library among + these notices, as well as a reference directing the user to the + copies of the GNU GPL and this license document. + + d) Do one of the following: + + 0) Convey the Minimal Corresponding Source under the terms of this + License, and the Corresponding Application Code in a form + suitable for, and under terms that permit, the user to + recombine or relink the Application with a modified version of + the Linked Version to produce a modified Combined Work, in the + manner specified by section 6 of the GNU GPL for conveying + Corresponding Source. + + 1) Use a suitable shared library mechanism for linking with the + Library. A suitable mechanism is one that (a) uses at run time + a copy of the Library already present on the user's computer + system, and (b) will operate properly with a modified version + of the Library that is interface-compatible with the Linked + Version. + + e) Provide Installation Information, but only if you would otherwise + be required to provide such information under section 6 of the + GNU GPL, and only to the extent that such information is + necessary to install and execute a modified version of the + Combined Work produced by recombining or relinking the + Application with a modified version of the Linked Version. (If + you use option 4d0, the Installation Information must accompany + the Minimal Corresponding Source and Corresponding Application + Code. If you use option 4d1, you must provide the Installation + Information in the manner specified by section 6 of the GNU GPL + for conveying Corresponding Source.) + + 5. Combined Libraries. + + You may place library facilities that are a work based on the +Library side by side in a single library together with other library +facilities that are not Applications and are not covered by this +License, and convey such a combined library under terms of your +choice, if you do both of the following: + + a) Accompany the combined library with a copy of the same work based + on the Library, uncombined with any other library facilities, + conveyed under the terms of this License. + + b) Give prominent notice with the combined library that part of it + is a work based on the Library, and explaining where to find the + accompanying uncombined form of the same work. + + 6. Revised Versions of the GNU Lesser General Public License. + + The Free Software Foundation may publish revised and/or new versions +of the GNU Lesser General Public License from time to time. Such new +versions will be similar in spirit to the present version, but may +differ in detail to address new problems or concerns. + + Each version is given a distinguishing version number. 
If the +Library as you received it specifies that a certain numbered version +of the GNU Lesser General Public License "or any later version" +applies to it, you have the option of following the terms and +conditions either of that published version or of any later version +published by the Free Software Foundation. If the Library as you +received it does not specify a version number of the GNU Lesser +General Public License, you may choose any version of the GNU Lesser +General Public License ever published by the Free Software Foundation. + + If the Library as you received it specifies that a proxy can decide +whether future versions of the GNU Lesser General Public License shall +apply, that proxy's public statement of acceptance of any version is +permanent authorization for you to choose that version for the +Library.
diff --git a/aarch64-linux-android-4.9/MODULE_LICENSE_GPL b/aarch64-linux-android-4.9/MODULE_LICENSE_GPL new file mode 100644 index 0000000..e69de29 --- /dev/null +++ b/aarch64-linux-android-4.9/MODULE_LICENSE_GPL
diff --git a/aarch64-linux-android-4.9/NOTICE b/aarch64-linux-android-4.9/NOTICE new file mode 100644 index 0000000..623b625 --- /dev/null +++ b/aarch64-linux-android-4.9/NOTICE
@@ -0,0 +1,340 @@ + GNU GENERAL PUBLIC LICENSE + Version 2, June 1991 + + Copyright (C) 1989, 1991 Free Software Foundation, Inc. + 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The licenses for most software are designed to take away your +freedom to share and change it. By contrast, the GNU General Public +License is intended to guarantee your freedom to share and change free +software--to make sure the software is free for all its users. This +General Public License applies to most of the Free Software +Foundation's software and to any other program whose authors commit to +using it. (Some other Free Software Foundation software is covered by +the GNU Library General Public License instead.) You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +this service if you wish), that you receive source code or can get it +if you want it, that you can change the software or use pieces of it +in new free programs; and that you know you can do these things. + + To protect your rights, we need to make restrictions that forbid +anyone to deny you these rights or to ask you to surrender the rights. +These restrictions translate to certain responsibilities for you if you +distribute copies of the software, or if you modify it. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must give the recipients all the rights that +you have. You must make sure that they, too, receive or can get the +source code. And you must show them these terms so they know their +rights. + + We protect your rights with two steps: (1) copyright the software, and +(2) offer you this license which gives you legal permission to copy, +distribute and/or modify the software. + + Also, for each author's protection and ours, we want to make certain +that everyone understands that there is no warranty for this free +software. If the software is modified by someone else and passed on, we +want its recipients to know that what they have is not the original, so +that any problems introduced by others will not reflect on the original +authors' reputations. + + Finally, any free program is threatened constantly by software +patents. We wish to avoid the danger that redistributors of a free +program will individually obtain patent licenses, in effect making the +program proprietary. To prevent this, we have made it clear that any +patent must be licensed for everyone's free use or not licensed at all. + + The precise terms and conditions for copying, distribution and +modification follow. + + GNU GENERAL PUBLIC LICENSE + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + + 0. This License applies to any program or other work which contains +a notice placed by the copyright holder saying it may be distributed +under the terms of this General Public License. The "Program", below, +refers to any such program or work, and a "work based on the Program" +means either the Program or any derivative work under copyright law: +that is to say, a work containing the Program or a portion of it, +either verbatim or with modifications and/or translated into another +language. 
(Hereinafter, translation is included without limitation in +the term "modification".) Each licensee is addressed as "you". + +Activities other than copying, distribution and modification are not +covered by this License; they are outside its scope. The act of +running the Program is not restricted, and the output from the Program +is covered only if its contents constitute a work based on the +Program (independent of having been made by running the Program). +Whether that is true depends on what the Program does. + + 1. You may copy and distribute verbatim copies of the Program's +source code as you receive it, in any medium, provided that you +conspicuously and appropriately publish on each copy an appropriate +copyright notice and disclaimer of warranty; keep intact all the +notices that refer to this License and to the absence of any warranty; +and give any other recipients of the Program a copy of this License +along with the Program. + +You may charge a fee for the physical act of transferring a copy, and +you may at your option offer warranty protection in exchange for a fee. + + 2. You may modify your copy or copies of the Program or any portion +of it, thus forming a work based on the Program, and copy and +distribute such modifications or work under the terms of Section 1 +above, provided that you also meet all of these conditions: + + a) You must cause the modified files to carry prominent notices + stating that you changed the files and the date of any change. + + b) You must cause any work that you distribute or publish, that in + whole or in part contains or is derived from the Program or any + part thereof, to be licensed as a whole at no charge to all third + parties under the terms of this License. + + c) If the modified program normally reads commands interactively + when run, you must cause it, when started running for such + interactive use in the most ordinary way, to print or display an + announcement including an appropriate copyright notice and a + notice that there is no warranty (or else, saying that you provide + a warranty) and that users may redistribute the program under + these conditions, and telling the user how to view a copy of this + License. (Exception: if the Program itself is interactive but + does not normally print such an announcement, your work based on + the Program is not required to print an announcement.) + +These requirements apply to the modified work as a whole. If +identifiable sections of that work are not derived from the Program, +and can be reasonably considered independent and separate works in +themselves, then this License, and its terms, do not apply to those +sections when you distribute them as separate works. But when you +distribute the same sections as part of a whole which is a work based +on the Program, the distribution of the whole must be on the terms of +this License, whose permissions for other licensees extend to the +entire whole, and thus to each and every part regardless of who wrote it. + +Thus, it is not the intent of this section to claim rights or contest +your rights to work written entirely by you; rather, the intent is to +exercise the right to control the distribution of derivative or +collective works based on the Program. + +In addition, mere aggregation of another work not based on the Program +with the Program (or with a work based on the Program) on a volume of +a storage or distribution medium does not bring the other work under +the scope of this License. + + 3. 
You may copy and distribute the Program (or a work based on it, +under Section 2) in object code or executable form under the terms of +Sections 1 and 2 above provided that you also do one of the following: + + a) Accompany it with the complete corresponding machine-readable + source code, which must be distributed under the terms of Sections + 1 and 2 above on a medium customarily used for software interchange; or, + + b) Accompany it with a written offer, valid for at least three + years, to give any third party, for a charge no more than your + cost of physically performing source distribution, a complete + machine-readable copy of the corresponding source code, to be + distributed under the terms of Sections 1 and 2 above on a medium + customarily used for software interchange; or, + + c) Accompany it with the information you received as to the offer + to distribute corresponding source code. (This alternative is + allowed only for noncommercial distribution and only if you + received the program in object code or executable form with such + an offer, in accord with Subsection b above.) + +The source code for a work means the preferred form of the work for +making modifications to it. For an executable work, complete source +code means all the source code for all modules it contains, plus any +associated interface definition files, plus the scripts used to +control compilation and installation of the executable. However, as a +special exception, the source code distributed need not include +anything that is normally distributed (in either source or binary +form) with the major components (compiler, kernel, and so on) of the +operating system on which the executable runs, unless that component +itself accompanies the executable. + +If distribution of executable or object code is made by offering +access to copy from a designated place, then offering equivalent +access to copy the source code from the same place counts as +distribution of the source code, even though third parties are not +compelled to copy the source along with the object code. + + 4. You may not copy, modify, sublicense, or distribute the Program +except as expressly provided under this License. Any attempt +otherwise to copy, modify, sublicense or distribute the Program is +void, and will automatically terminate your rights under this License. +However, parties who have received copies, or rights, from you under +this License will not have their licenses terminated so long as such +parties remain in full compliance. + + 5. You are not required to accept this License, since you have not +signed it. However, nothing else grants you permission to modify or +distribute the Program or its derivative works. These actions are +prohibited by law if you do not accept this License. Therefore, by +modifying or distributing the Program (or any work based on the +Program), you indicate your acceptance of this License to do so, and +all its terms and conditions for copying, distributing or modifying +the Program or works based on it. + + 6. Each time you redistribute the Program (or any work based on the +Program), the recipient automatically receives a license from the +original licensor to copy, distribute or modify the Program subject to +these terms and conditions. You may not impose any further +restrictions on the recipients' exercise of the rights granted herein. +You are not responsible for enforcing compliance by third parties to +this License. + + 7. 
If, as a consequence of a court judgment or allegation of patent +infringement or for any other reason (not limited to patent issues), +conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot +distribute so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you +may not distribute the Program at all. For example, if a patent +license would not permit royalty-free redistribution of the Program by +all those who receive copies directly or indirectly through you, then +the only way you could satisfy both it and this License would be to +refrain entirely from distribution of the Program. + +If any portion of this section is held invalid or unenforceable under +any particular circumstance, the balance of the section is intended to +apply and the section as a whole is intended to apply in other +circumstances. + +It is not the purpose of this section to induce you to infringe any +patents or other property right claims or to contest validity of any +such claims; this section has the sole purpose of protecting the +integrity of the free software distribution system, which is +implemented by public license practices. Many people have made +generous contributions to the wide range of software distributed +through that system in reliance on consistent application of that +system; it is up to the author/donor to decide if he or she is willing +to distribute software through any other system and a licensee cannot +impose that choice. + +This section is intended to make thoroughly clear what is believed to +be a consequence of the rest of this License. + + 8. If the distribution and/or use of the Program is restricted in +certain countries either by patents or by copyrighted interfaces, the +original copyright holder who places the Program under this License +may add an explicit geographical distribution limitation excluding +those countries, so that distribution is permitted only in or among +countries not thus excluded. In such case, this License incorporates +the limitation as if written in the body of this License. + + 9. The Free Software Foundation may publish revised and/or new versions +of the General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + +Each version is given a distinguishing version number. If the Program +specifies a version number of this License which applies to it and "any +later version", you have the option of following the terms and conditions +either of that version or of any later version published by the Free +Software Foundation. If the Program does not specify a version number of +this License, you may choose any version ever published by the Free Software +Foundation. + + 10. If you wish to incorporate parts of the Program into other free +programs whose distribution conditions are different, write to the author +to ask for permission. For software which is copyrighted by the Free +Software Foundation, write to the Free Software Foundation; we sometimes +make exceptions for this. Our decision will be guided by the two goals +of preserving the free status of all derivatives of our free software and +of promoting the sharing and reuse of software generally. + + NO WARRANTY + + 11. 
BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY +FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN +OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES +PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED +OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF +MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS +TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE +PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, +REPAIR OR CORRECTION. + + 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR +REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, +INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING +OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED +TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY +YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER +PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE +POSSIBILITY OF SUCH DAMAGES. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +convey the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. + + <one line to give the program's name and a brief idea of what it does.> + Copyright (C) <year> <name of author> + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + + +Also add information on how to contact you by electronic and paper mail. + +If the program is interactive, make it output a short notice like this +when it starts in an interactive mode: + + Gnomovision version 69, Copyright (C) year name of author + Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate +parts of the General Public License. Of course, the commands you use may +be called something other than `show w' and `show c'; they could even be +mouse-clicks or menu items--whatever suits your program. + +You should also get your employer (if you work as a programmer) or your +school, if any, to sign a "copyright disclaimer" for the program, if +necessary. 
Here is a sample; alter the names: + + Yoyodyne, Inc., hereby disclaims all copyright interest in the program + `Gnomovision' (which makes passes at compilers) written by James Hacker. + + <signature of Ty Coon>, 1 April 1989 + Ty Coon, President of Vice + +This General Public License does not permit incorporating your program into +proprietary programs. If your program is a subroutine library, you may +consider it more useful to permit linking proprietary applications with the +library. If this is what you want to do, use the GNU Library General +Public License instead of this License.
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/bin/ar b/aarch64-linux-android-4.9/aarch64-linux-android/bin/ar new file mode 120000 index 0000000..e9e32c0 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/bin/ar
@@ -0,0 +1 @@ +../../bin/aarch64-linux-android-ar \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/bin/as b/aarch64-linux-android-4.9/aarch64-linux-android/bin/as new file mode 120000 index 0000000..4fa9c65 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/bin/as
@@ -0,0 +1 @@ +../../bin/aarch64-linux-android-as \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/bin/ld b/aarch64-linux-android-4.9/aarch64-linux-android/bin/ld new file mode 120000 index 0000000..91c0fad --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/bin/ld
@@ -0,0 +1 @@ +../../bin/aarch64-linux-android-ld \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/bin/ld.bfd b/aarch64-linux-android-4.9/aarch64-linux-android/bin/ld.bfd new file mode 120000 index 0000000..f976021 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/bin/ld.bfd
@@ -0,0 +1 @@ +../../bin/aarch64-linux-android-ld.bfd \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/bin/ld.gold b/aarch64-linux-android-4.9/aarch64-linux-android/bin/ld.gold new file mode 120000 index 0000000..f421e1b --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/bin/ld.gold
@@ -0,0 +1 @@ +../../bin/aarch64-linux-android-ld.gold \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/bin/nm b/aarch64-linux-android-4.9/aarch64-linux-android/bin/nm new file mode 120000 index 0000000..e8e10f3 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/bin/nm
@@ -0,0 +1 @@ +../../bin/aarch64-linux-android-nm \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/bin/objcopy b/aarch64-linux-android-4.9/aarch64-linux-android/bin/objcopy new file mode 120000 index 0000000..23aafa8 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/bin/objcopy
@@ -0,0 +1 @@ +../../bin/aarch64-linux-android-objcopy \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/bin/objdump b/aarch64-linux-android-4.9/aarch64-linux-android/bin/objdump new file mode 120000 index 0000000..245ece7 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/bin/objdump
@@ -0,0 +1 @@ +../../bin/aarch64-linux-android-objdump \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/bin/ranlib b/aarch64-linux-android-4.9/aarch64-linux-android/bin/ranlib new file mode 120000 index 0000000..5b57d40 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/bin/ranlib
@@ -0,0 +1 @@ +../../bin/aarch64-linux-android-ranlib \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/bin/strip b/aarch64-linux-android-4.9/aarch64-linux-android/bin/strip new file mode 120000 index 0000000..248094c --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/bin/strip
@@ -0,0 +1 @@ +../../bin/aarch64-linux-android-strip \ No newline at end of file
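The ten bin/ entries above are created as relative symbolic links (file mode 120000), so aarch64-linux-android/bin/<tool> forwards to the prefixed binutils in the toolchain's top-level bin/ directory — the layout a cross gcc conventionally searches for the target's assembler and linker. A small sketch of how one such relative target resolves (the install prefix below is an assumption; the link target is the string recorded in the diff):

    # Sketch: how one of the relative links above resolves. The install root
    # is an assumption; the link target is the string recorded in the diff.
    from pathlib import Path

    root = Path("/opt/aarch64-linux-android-4.9")          # assumed install prefix
    link = root / "aarch64-linux-android" / "bin" / "ar"   # the mode-120000 entry above
    target = "../../bin/aarch64-linux-android-ar"          # stored link target

    # A relative symlink target is interpreted against the link's directory:
    resolved = (link.parent / target).resolve()
    print(resolved)  # /opt/aarch64-linux-android-4.9/bin/aarch64-linux-android-ar

    # On an unpacked toolchain, os.readlink(link) returns the stored target
    # string, and link.resolve() performs this same normalization.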
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.x b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.x new file mode 100644 index 0000000..284d0ba --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.x
@@ -0,0 +1,220 @@ +/* Default linker script, for normal executables */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
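Note: this default script PROVIDEs the classic layout symbols (etext, edata, end, plus __bss_start/_end and the hidden __init_array_start/__init_array_end bounds), so ordinary C code may declare and inspect them. A minimal illustrative program, not part of this import:

/* Sketch: etext/edata/end are PROVIDEd by the script above, so merely
   declaring them as externs makes the linker materialize them. */
#include <stdio.h>

extern char etext[], edata[], end[];  /* first addr past .text, .data, .bss */

int main(void)
{
    printf("etext=%p edata=%p end=%p\n",
           (void *)etext, (void *)edata, (void *)end);
    return 0;
}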
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xbn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xbn
new file mode 100644
index 0000000..d859b90
--- /dev/null
+++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xbn
@@ -0,0 +1,219 @@ +/* Script for -N: mix text and data on same page; don't align data */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = .; + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xc
new file mode 100644
index 0000000..36f03c1
--- /dev/null
+++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xc
@@ -0,0 +1,221 @@ +/* Script for -z combreloc: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
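Note: every script in this import routes prioritized constructors through SORT_BY_INIT_PRIORITY(.init_array.* ...), so compiler-assigned constructor priorities map directly to link-time ordering. An illustrative C example (assumes GCC/Clang on an ELF target; not part of the import):

/* Sketch: constructor(N) emits an .init_array.NNNNN input section, which
   SORT_BY_INIT_PRIORITY orders numerically; unprioritized constructors go
   into plain .init_array and run afterwards. Priorities 0-100 are reserved. */
#include <stdio.h>

__attribute__((constructor(101))) static void early(void) { puts("ctor 101"); }
__attribute__((constructor(200))) static void later(void) { puts("ctor 200"); }
__attribute__((constructor))      static void plain(void) { puts("default ctor"); }

int main(void) { puts("main"); return 0; }

Expected output order: ctor 101, ctor 200, default ctor, main.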
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xn
new file mode 100644
index 0000000..6e6a528
--- /dev/null
+++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xn
@@ -0,0 +1,219 @@ +/* Script for -n: mix text and data on same page */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
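Note: all of these scripts place initialized thread-local data in .tdata and zero-initialized thread-local data in .tbss (plus .tcommon). A small illustrative C snippet (assumes the GCC __thread extension; not part of the import):

/* Sketch: where __thread variables land under the TLS rules above. */
#include <stdio.h>

__thread int counter = 42;  /* initialized      -> .tdata */
__thread int scratch;       /* zero-initialized -> .tbss  */

int main(void)
{
    printf("counter=%d scratch=%d\n", counter, scratch);
    return 0;
}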
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xr b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xr
new file mode 100644
index 0000000..5a559e4
--- /dev/null
+++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xr
@@ -0,0 +1,145 @@ +/* Script for ld -r: link without relocation */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0 : + { + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xs b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xs
new file mode 100644
index 0000000..b205773
--- /dev/null
+++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xs
@@ -0,0 +1,210 @@ +/* Script for ld --shared: link shared library */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
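Note: the shared-library scripts fold *(.rela.iplt) into .rela.plt, which is where the relocations produced by GNU indirect functions typically land. A hedged C sketch of such a function (assumes GCC's ifunc attribute on an ELF target; build with -shared -fPIC; not part of the import):

/* Sketch: a GNU ifunc whose resolver picks the implementation while
   relocations are being processed; its relocation entries are covered by
   the *(.rela.plt)/*(.rela.iplt) rules above. */
static int add_generic(int a, int b) { return a + b; }

/* Resolver must live in the same translation unit as the ifunc. */
static int (*resolve_add(void))(int, int) { return add_generic; }

int add(int a, int b) __attribute__((ifunc("resolve_add")));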
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xsc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xsc
new file mode 100644
index 0000000..b54f313
--- /dev/null
+++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xsc
@@ -0,0 +1,213 @@ +/* Script for --shared -z combreloc: shared library, combine & sort relocs */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xsw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xsw
new file mode 100644
index 0000000..06c1f95
--- /dev/null
+++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xsw
@@ -0,0 +1,211 @@ +/* Script for --shared -z combreloc -z now -z relro: shared library, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
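Note: the functional difference from aarch64elf.xsc above is the GOT rule. This -z now -z relro variant folds *(.got.plt) and *(.igot.plt) into .got itself; with all PLT bindings resolved at startup, the whole GOT can then sit inside the region the loader remaps read-only after relocation (PT_GNU_RELRO).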
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xu b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xu
new file mode 100644
index 0000000..55b5ca1
--- /dev/null
+++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xu
@@ -0,0 +1,146 @@ +/* Script for ld -Ur: link w/out relocation, do create constructors */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + SORT(CONSTRUCTORS) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0 : + { + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
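Note on the KEEP(...) wrapping used throughout this -Ur script: KEEP marks input sections that must survive --gc-sections even when nothing references them, which is why .init/.fini and the init/fini arrays are always wrapped. A rough compiler-side analogue in C (section name illustrative; a matching KEEP rule would be needed in a script to protect it from section GC):

    #include <stdio.h>

    /* "used" stops the compiler from dropping the unreferenced function;
       with -ffunction-sections -Wl,--gc-sections the linker would still
       discard .text.keep_me unless a script rule says KEEP(*(.text.keep_me)). */
    __attribute__((used, section(".text.keep_me")))
    static void keep_me(void) { puts("still here"); }

    int main(void) { return 0; }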
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xw new file mode 100644 index 0000000..5a74203 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf.xw
@@ -0,0 +1,220 @@ +/* Script for -z combreloc -z now -z relro: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
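Note on this -z now -z relro variant: everything from .data.rel.ro through .got is laid out ahead of .data so the dynamic linker can remap the already-relocated region read-only at startup. A sketch of const data that still needs load-time relocations and therefore lands in .data.rel.ro when built with -fPIC (assumed toolchain: gcc on a Linux target):

    #include <stdio.h>

    /* The array is const, but every element is a pointer that needs a
       relocation; with -fPIC it is placed in .data.rel.ro, which this
       script keeps inside the RELRO region. */
    static const char *const greetings[] = { "hello", "hallo", "bonjour" };

    int main(void) {
        for (unsigned i = 0; i < sizeof greetings / sizeof greetings[0]; i++)
            puts(greetings[i]);
        return 0;
    }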
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.x b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.x new file mode 100644 index 0000000..d7a30b0 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.x
@@ -0,0 +1,220 @@ +/* Default linker script, for normal executables */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
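Note on the layout symbols this script defines (__bss_start, _edata, _end, PROVIDE(end)): they are bare addresses, so C code should declare them as incomplete arrays and use only their addresses. Minimal sketch for a hosted build:

    #include <stdio.h>

    /* Defined by the linker script; only the addresses are meaningful. */
    extern char __bss_start[], _edata[], _end[];

    int main(void) {
        printf("data ends at   %p\n", (void *)_edata);
        printf(".bss starts at %p\n", (void *)__bss_start);
        printf("image ends at  %p\n", (void *)_end);
        printf(".bss size      %td bytes\n", _end - __bss_start);
        return 0;
    }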
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xbn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xbn new file mode 100644 index 0000000..d9c37c8 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xbn
@@ -0,0 +1,219 @@ +/* Script for -N: mix text and data on same page; don't align data */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = .; + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
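Note on .preinit_array, which this -N script keeps even though it drops the data-segment page alignment (the ". = .;" above): a function pointer placed there runs before every .init_array entry. The (argc, argv, envp) signature below is a glibc convention, stated here as an assumption:

    #include <stdio.h>

    static void early(int argc, char **argv, char **envp) {
        (void)argc; (void)argv; (void)envp;
        puts("preinit");
    }

    /* KEEP(*(.preinit_array)) in the script retains the pointer; "used"
       stops the compiler discarding it.  Honored in executables only,
       not in shared objects. */
    __attribute__((used, section(".preinit_array")))
    static void (*early_ptr)(int, char **, char **) = early;

    int main(void) { puts("main"); return 0; }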
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xc new file mode 100644 index 0000000..91ad870 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xc
@@ -0,0 +1,221 @@ +/* Script for -z combreloc: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
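Note on the PROVIDE_HIDDEN(__init_array_start/__init_array_end) pair in this script: the C runtime walks that range at startup. User code can inspect the bounds too, though re-invoking the entries would re-run constructors libc has already run. Sketch (assumes the hidden symbols resolve within the same module, e.g. a normal executable link):

    #include <stdio.h>

    typedef void (*init_fn)(void);

    /* Provided by the linker script. */
    extern init_fn __init_array_start[], __init_array_end[];

    __attribute__((constructor)) static void ctor(void) { puts("ctor"); }

    int main(void) {
        printf("%td init_array entries\n",
               __init_array_end - __init_array_start);
        return 0;
    }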
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xn new file mode 100644 index 0000000..4fe730d --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xn
@@ -0,0 +1,219 @@ +/* Script for -n: mix text and data on same page */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
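Note on the .tdata/.tbss pair: initialized __thread variables store their initialization image in .tdata, zero-initialized ones are only accounted for in .tbss, and each thread gets a private copy at startup. Sketch (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    __thread int counted = 5;   /* image stored in .tdata */
    __thread int zeroed;        /* sized in .tbss, no file contents */

    static void *worker(void *arg) {
        (void)arg;
        counted++;   /* touches only this thread's copy */
        printf("worker: counted=%d zeroed=%d\n", counted, zeroed);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        printf("main:   counted=%d\n", counted);   /* still 5 */
        return 0;
    }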
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xr b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xr new file mode 100644 index 0000000..dd18907 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xr
@@ -0,0 +1,145 @@ +/* Script for ld -r: link without relocation */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0 : + { + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
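Note on the *(COMMON) rule in .bss: it collects tentative definitions, which ld -r leaves as COMMON symbols until the final link. GCC has defaulted to -fno-common since GCC 10, so reproducing this today needs the flag (assumption noted in the comment):

    #include <stdio.h>

    /* Compiled with -fcommon, "nm" reports this as a COMMON symbol ('C');
       the final link then allocates it in .bss via *(COMMON). */
    int shared;

    int main(void) { printf("shared=%d\n", shared); return 0; }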
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xs b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xs new file mode 100644 index 0000000..da34dac --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xs
@@ -0,0 +1,210 @@ +/* Script for ld --shared: link shared library */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
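The data-segment expression used throughout these scripts, . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1));, advances the location counter to the next maximum-page boundary while re-adding the current offset within a page, so the data segment's virtual address stays congruent to its file offset modulo the page size and the loader can demand-page it directly. A worked sketch with hypothetical values (not taken from the script):

  /* Suppose MAXPAGESIZE = 0x10000 and . = 0x00412345 when the
     expression runs:
       ALIGN(0x10000)     -> 0x00420000   (next page boundary)
       . & (0x10000 - 1)  -> 0x00002345   (offset within the page)
       sum                -> 0x00422345
     Data lands on a fresh page but keeps the same in-page offset,
     preserving the VA/file-offset congruence that paging requires. */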
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xsc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xsc new file mode 100644 index 0000000..270616e --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xsc
@@ -0,0 +1,213 @@ +/* Script for --shared -z combreloc: shared library, combine & sort relocs */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
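Relative to the plain --shared script above, -z combreloc changes only the relocation layout: instead of one output section per input class (.rela.text, .rela.data, and so on), every dynamic relocation except the PLT's is gathered into a single .rela.dyn, which the linker then sorts so relocations against the same symbol sit together. A minimal sketch of the contrast (illustrative, not part of the file):

  /* without -z combreloc: one output section per input class  */
  .rela.text : { *(.rela.text .rela.text.*) }
  .rela.data : { *(.rela.data .rela.data.*) }
  /* with -z combreloc: one combined section, sorted by symbol */
  .rela.dyn  : { *(.rela.text .rela.text.*) *(.rela.data .rela.data.*) }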
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xsw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xsw new file mode 100644 index 0000000..6d54d91 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xsw
@@ -0,0 +1,211 @@ +/* Script for --shared -z combreloc -z now -z relro: shared library, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
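Adding -z now -z relro changes one rule relative to the .xsc script: the .got.plt input sections are folded into .got and the separate .got.plt output section disappears. With lazy binding disabled, every GOT slot is resolved at load time, so the entire GOT can sit inside the PT_GNU_RELRO region and be remapped read-only after relocation. A sketch of the two layouts (illustrative only):

  /* lazy binding (default): .got.plt must stay writable        */
  .got     : { *(.got) *(.igot) }
  .got.plt : { *(.got.plt) *(.igot.plt) }
  /* -z now -z relro: one GOT, wholly inside the relro segment  */
  .got     : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) }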
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xu b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xu new file mode 100644 index 0000000..b8d1754 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xu
@@ -0,0 +1,146 @@ +/* Script for ld -Ur: link w/out relocation, do create constructors */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + SORT(CONSTRUCTORS) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0 : + { + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
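As the Solaris comment in this script notes, -Ur output gets explicit zero addresses: the 0 after each section name is a VMA, not an alignment, because a relocatable object carries no final load addresses and they are only assigned at the final link. The idiom in isolation (sketch only):

  /* section-name VMA : { input sections }                           */
  .text 0 : { *(.text .stub) }   /* placed at 0; final link relocates */
  .data 0 : { *(.data) }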
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xw new file mode 100644 index 0000000..1582ee1 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32.xw
@@ -0,0 +1,220 @@ +/* Script for -z combreloc -z now -z relro: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
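Unlike the shared-library scripts, the executable scripts bracket *(.rela.iplt) with hidden start/end symbols. Statically linked startup code walks the [__rela_iplt_start, __rela_iplt_end) range to apply R_AARCH64_IRELATIVE relocations for IFUNCs, since no dynamic linker is present to do it. The idiom reduced to its core (sketch):

  .rela.plt :
  {
    *(.rela.plt)
    PROVIDE_HIDDEN (__rela_iplt_start = .);  /* first IRELATIVE entry */
    *(.rela.iplt)
    PROVIDE_HIDDEN (__rela_iplt_end = .);    /* one past the last     */
  }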
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.x b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.x new file mode 100644 index 0000000..4ca3c01 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.x
@@ -0,0 +1,220 @@ +/* Default linker script, for normal executables */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
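The aarch64elf32b.* scripts differ from their little-endian counterparts only in the first OUTPUT_FORMAT argument. The three-argument form selects the BFD target by command-line endianness: the first name is the default, the second is used when ld is invoked with -EB, and the third with -EL. Annotated sketch:

  OUTPUT_FORMAT("elf32-bigaarch64",      /* default output target */
                "elf32-bigaarch64",      /* chosen under -EB      */
                "elf32-littleaarch64")   /* chosen under -EL      */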
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xbn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xbn new file mode 100644 index 0000000..59842c6 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xbn
@@ -0,0 +1,219 @@ +/* Script for -N: mix text and data on same page; don't align data */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = .; + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
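The -N script keeps the data-segment comment but replaces the alignment expression with the no-op assignment . = .;. Under -N, text and data share one writable segment, so there is no demand-paging constraint to satisfy and data begins immediately, which is what "don't align data" in the header means. Side by side (illustrative):

  /* default scripts: preserve VA/file-offset congruence for paging */
  . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1));
  /* -N: text and data share pages; no adjustment at all            */
  . = .;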
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xc new file mode 100644 index 0000000..8b2ba31 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xc
@@ -0,0 +1,221 @@ +/* Script for -z combreloc: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
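The .ctors recipe in these scripts encodes a fixed table layout: crtbegin.o's marker word comes first, user constructors fill the middle, and crtend.o's terminator comes last; EXCLUDE_FILE holds crtend's section back until the final clause, which is the only one it can still match. Reduced to a skeleton (sketch only):

  .ctors :
  {
    KEEP (*crtbegin*.o(.ctors))                  /* begin-of-list marker      */
    KEEP (*(EXCLUDE_FILE (*crtend*.o ) .ctors))  /* everything except crtend  */
    KEEP (*(SORT(.ctors.*)))                     /* priority-suffixed ctors   */
    KEEP (*(.ctors))                             /* crtend's terminator       */
  }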
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xn new file mode 100644 index 0000000..980c070 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xn
@@ -0,0 +1,219 @@ +/* Script for -n: mix text and data on same page */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
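The script above is the -n variant of the ILP32 big-endian layout: text and data may share a page, the image is based at 0x00400000, and the traditional end-of-segment symbols (etext, _etext, __etext, edata, end) are PROVIDEd, meaning they are defined only if the program does not define them itself. A minimal C sketch of taking their addresses, assuming a hosted toolchain linking with a script from this family:

/* Reading the layout symbols PROVIDEd by the script above.  The
   extern declarations carry no storage of their own; only the
   addresses of etext, edata and end are meaningful. */
#include <stdio.h>

extern char etext;   /* first address past the text segment  */
extern char edata;   /* first address past initialized data  */
extern char end;     /* first address past the .bss section  */

int main(void)
{
    printf("etext = %p\n", (void *)&etext);
    printf("edata = %p\n", (void *)&edata);
    printf("end   = %p\n", (void *)&end);
    return 0;
}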
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xr b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xr new file mode 100644 index 0000000..3a9bbb7 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xr
@@ -0,0 +1,145 @@ +/* Script for ld -r: link without relocation */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0 : + { + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
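The -r script differs from the executable scripts in two ways that matter for partial links: every output section gets a zero vma (the Solaris workaround noted in its header) and the .rela.* relocation sections are kept rather than resolved. Sections not named by any of these scripts are carried through as orphans; at the final link GNU ld synthesizes __start_NAME/__stop_NAME symbols around an orphan section whose name is a valid C identifier. A sketch of that idiom (GNU C; the section name "mylist" is an arbitrary example):

/* Entries accumulated in a custom section.  Under ld -r (the script
   above) the per-object "mylist" sections are merged with their
   relocations intact; the final link then brackets the section with
   __start_mylist/__stop_mylist. */
#include <stdio.h>

struct item { const char *name; };

__attribute__((section("mylist"), used))
static struct item item_a = { "a" };

__attribute__((section("mylist"), used))
static struct item item_b = { "b" };

extern struct item __start_mylist[], __stop_mylist[];

int main(void)
{
    for (struct item *p = __start_mylist; p < __stop_mylist; p++)
        puts(p->name);
    return 0;
}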
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xs b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xs new file mode 100644 index 0000000..b568f69 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xs
@@ -0,0 +1,210 @@ +/* Script for ld --shared: link shared library */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
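The --shared script starts the image at 0 + SIZEOF_HEADERS rather than at a fixed base, and collects thread-local storage into .tdata (initialized) and .tbss (zero-filled), the sections named in its TLS block above. A small sketch of how C-level TLS lands there (the section assignment is the compiler's; GNU C shown):

/* Thread-local variables and the sections the script collects. */
#include <stdio.h>

__thread int tls_counter = 42;  /* initialized TLS -> .tdata */
__thread int tls_scratch;       /* zero-filled TLS -> .tbss  */

int main(void)
{
    tls_scratch = tls_counter + 1;
    printf("%d %d\n", tls_counter, tls_scratch);
    return 0;
}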
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xsc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xsc new file mode 100644 index 0000000..c9819eb --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xsc
@@ -0,0 +1,213 @@ +/* Script for --shared -z combreloc: shared library, combine & sort relocs */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
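Beyond combining relocations into a single .rela.dyn, this script orders initializers: .init_array.* and legacy .ctors.* entries are interleaved by SORT_BY_INIT_PRIORITY and placed before the unsuffixed .init_array entries. With GCC, a constructor priority selects the numbered section, so lower-numbered constructors run first (priorities up to 100 are reserved for the implementation). A sketch:

/* Constructor ordering under SORT_BY_INIT_PRIORITY.  GCC emits
   constructor(N) into a numbered .init_array.* section, which the
   script above sorts ahead of plain .init_array entries. */
#include <stdio.h>

__attribute__((constructor(101)))
static void runs_first(void)  { puts("init, priority 101"); }

__attribute__((constructor(200)))
static void runs_second(void) { puts("init, priority 200"); }

int main(void)
{
    puts("main");
    return 0;
}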
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xsw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xsw new file mode 100644 index 0000000..a6c25cb --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xsw
@@ -0,0 +1,211 @@ +/* Script for --shared -z combreloc -z now -z relro: shared library, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
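The -z now -z relro variant differs from the plain --shared script in one detail visible above: .got.plt and .igot.plt are folded into .got itself. With lazy binding disabled by -z now, the whole GOT can sit inside the PT_GNU_RELRO range that the dynamic linker remaps read-only after startup. A Linux-specific sketch for observing the result at run time (look for the r--p data mapping):

/* Dump this process's mappings; with full RELRO the pages holding
   .dynamic and .got appear read-only (r--p) once startup is done. */
#include <stdio.h>

int main(void)
{
    FILE *maps = fopen("/proc/self/maps", "r");
    char line[512];

    if (!maps)
        return 1;
    while (fgets(line, sizeof line, maps))
        fputs(line, stdout);
    fclose(maps);
    return 0;
}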
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xu b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xu new file mode 100644 index 0000000..dd9747c --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xu
@@ -0,0 +1,146 @@ +/* Script for ld -Ur: link w/out relocation, do create constructors */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + SORT(CONSTRUCTORS) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0 : + { + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xw new file mode 100644 index 0000000..cd6a0ce --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elf32b.xw
@@ -0,0 +1,220 @@ +/* Script for -z combreloc -z now -z relro: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
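Like the other executable scripts, this one brackets .rela.iplt with the hidden __rela_iplt_start/__rela_iplt_end symbols; statically linked startup code walks that range to apply IRELATIVE relocations for GNU indirect functions. A sketch of an ifunc whose relocations end up there (GNU C; the resolver must be defined in the same translation unit):

/* A GNU indirect function: the resolver runs while IRELATIVE
   relocations are applied, and its return value becomes the real
   target of add(). */
#include <stdio.h>

typedef int add_fn(int, int);

static int add_generic(int a, int b) { return a + b; }

static add_fn *resolve_add(void)
{
    /* a real resolver would select an implementation from hwcaps */
    return add_generic;
}

int add(int a, int b) __attribute__((ifunc("resolve_add")));

int main(void)
{
    printf("add(2, 3) = %d\n", add(2, 3));
    return 0;
}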
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.x b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.x new file mode 100644 index 0000000..57a72c9 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.x
@@ -0,0 +1,220 @@ +/* Default linker script, for normal executables */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
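This is the default script for the LP64 ABI: OUTPUT_ARCH(aarch64) rather than aarch64:ilp32, with 8-byte ALIGN(64 / 8) where the aarch64elf32* scripts above align to 4 bytes. A compile-time sketch for telling the two data models apart, assuming GCC's predefined macros (__ILP32__ is set under -mabi=ilp32):

/* Which AArch64 data model is this build targeting? */
#include <stdio.h>

int main(void)
{
#if defined(__aarch64__) && defined(__ILP32__)
    puts("ILP32: aarch64elf32* scripts, 32-bit pointers");
#elif defined(__aarch64__)
    puts("LP64: aarch64elf* scripts, 64-bit pointers");
#endif
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    return 0;
}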
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xbn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xbn new file mode 100644 index 0000000..d0828d6 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xbn
@@ -0,0 +1,219 @@ +/* Script for -N: mix text and data on same page; don't align data */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = .; + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
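Note on the script above: the .xbn variant serves ld -N (text and data share pages, data unaligned), and like the other executable scripts it PROVIDEs the traditional etext/_etext, edata and end symbols at the boundaries it lays out. A minimal C probe of those symbols, under the assumption of a typical build such as aarch64-linux-android-gcc -N demo.c (file name and flags are illustrative):

/* demo.c -- inspect the boundary symbols PROVIDEd by the script.
   These are declared, never defined, in C; the linker supplies them. */
#include <stdio.h>

extern char etext, edata, end;   /* addresses only; contents are meaningless */

int main(void)
{
    printf("end of text    (etext): %p\n", (void *)&etext);
    printf("end of data    (edata): %p\n", (void *)&edata);
    printf("end of program (end)  : %p\n", (void *)&end);
    return 0;
}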
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xc new file mode 100644 index 0000000..6574a43 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xc
@@ -0,0 +1,221 @@ +/* Script for -z combreloc: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
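The -z combreloc script above differs from the .xbn/.xn variants by merging every .rela.* input section into one sorted .rela.dyn output section, which the dynamic linker finds through the DT_RELA/DT_RELASZ entries of the .dynamic section the script also places. A sketch that reads those tags back at run time, assuming a dynamically linked binary where _DYNAMIC is visible (purely illustrative):

/* reladyn.c -- locate the combined relocation table via the dynamic array. */
#include <elf.h>
#include <stdio.h>

extern Elf64_Dyn _DYNAMIC[];     /* populated from the .dynamic output section */

int main(void)
{
    Elf64_Addr rela = 0, relasz = 0;
    for (Elf64_Dyn *d = _DYNAMIC; d->d_tag != DT_NULL; d++) {
        if (d->d_tag == DT_RELA)   rela   = d->d_un.d_ptr;   /* start of .rela.dyn */
        if (d->d_tag == DT_RELASZ) relasz = d->d_un.d_val;   /* total size in bytes */
    }
    printf(".rela.dyn at %#llx, %llu bytes, %llu entries\n",
           (unsigned long long)rela, (unsigned long long)relasz,
           (unsigned long long)(relasz / sizeof(Elf64_Rela)));
    return 0;
}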
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xn new file mode 100644 index 0000000..2179e65 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xn
@@ -0,0 +1,219 @@ +/* Script for -n: mix text and data on same page */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
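Unlike the -N script, this -n variant keeps the data-segment expression . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); — it advances the location counter past a max-page boundary while re-adding the sub-page offset, so the data segment's virtual address stays congruent to its file offset modulo the page size and the loader can mmap it directly. A worked version of that arithmetic; the 64 KiB MAXPAGESIZE value and the sample address are assumptions for illustration:

/* segalign.c -- model the data-segment expression from the script. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t maxpage = 0x10000;   /* assumed AArch64 MAXPAGESIZE (64 KiB) */
    uint64_t dot = 0x00412345;          /* hypothetical end of the text segment */

    uint64_t aligned = ((dot + maxpage - 1) & ~(maxpage - 1))  /* ALIGN(MAXPAGESIZE) */
                     + (dot & (maxpage - 1));                  /* keep sub-page offset */

    /* aligned mod maxpage == dot mod maxpage, which is the whole point */
    printf("dot %#llx -> data segment at %#llx (offset %#llx preserved)\n",
           (unsigned long long)dot, (unsigned long long)aligned,
           (unsigned long long)(dot & (maxpage - 1)));
    return 0;
}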
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xr b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xr new file mode 100644 index 0000000..b085d7e --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xr
@@ -0,0 +1,145 @@ +/* Script for ld -r: link without relocation */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0 : + { + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
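This ld -r script gives every output section address 0 and keeps the .rela.* sections as real output sections instead of resolving them, because the result is itself a relocatable object destined for a later final link. A minimal translation unit for exercising a partial link; the command lines are assumptions about a typical invocation:

/* unit.c -- input for:  aarch64-linux-android-gcc -c unit.c -o unit.o
   followed by:          aarch64-linux-android-ld -r unit.o -o partial.o
   In partial.o every section has VMA 0, matching the zero-address
   definitions above, and the reference to 'answer' below survives as
   an entry in .rela.text rather than being resolved. */
int answer = 42;                          /* lands in .data */

int get_answer(void)                      /* lands in .text */
{
    return answer;                        /* keeps a relocation against 'answer' */
}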
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xs b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xs new file mode 100644 index 0000000..a02fba4 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xs
@@ -0,0 +1,210 @@ +/* Script for ld --shared: link shared library */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
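The --shared script starts layout at . = 0 + SIZEOF_HEADERS; and defines no .interp section: every address in the file is a small link-time offset, and final placement is the dynamic loader's job. A sketch that makes the resulting load bias observable; the library name and build line are assumptions:

/* demo.c -- build (illustrative):
   aarch64-linux-android-gcc -shared -fPIC -o libdemo.so demo.c */
#include <stdio.h>

void where_am_i(void)
{
    /* Runtime address = small link-time VMA (just past SIZEOF_HEADERS)
       plus whatever base address the dynamic loader chose. */
    printf("where_am_i loaded at %p\n", (void *)&where_am_i);
}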
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xsc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xsc new file mode 100644 index 0000000..9505798 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xsc
@@ -0,0 +1,213 @@ +/* Script for --shared -z combreloc: shared library, combine & sort relocs */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
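The KEEP rules in .init_array/.fini_array above, together with SORT_BY_INIT_PRIORITY, are what give GCC's constructor and destructor attributes their load-time ordering in a library built through this --shared -z combreloc script. A sketch, with file name and build line as assumptions:

/* ctor.c -- build (illustrative):
   aarch64-linux-android-gcc -shared -fPIC -o libctor.so ctor.c */
#include <stdio.h>

__attribute__((constructor(101)))   /* emitted into .init_array.00101; ordered
                                       by SORT_BY_INIT_PRIORITY in the script */
static void early_init(void) { puts("early_init"); }

__attribute__((constructor))        /* unprioritised; plain .init_array */
static void late_init(void)  { puts("late_init"); }

__attribute__((destructor))         /* collected into .fini_array */
static void on_unload(void)  { puts("on_unload"); }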
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xsw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xsw new file mode 100644 index 0000000..112f9fb --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xsw
@@ -0,0 +1,211 @@ +/* Script for --shared -z combreloc -z now -z relro: shared library, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
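The -z now -z relro variant differs from aarch64elfb.xsc chiefly in its GOT layout: .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } folds the PLT's GOT slots into the single .got that precedes the data segment, so the whole table can be made read-only after startup — which is only safe because -z now forces eager binding. A probe for the eager-binding flag the linker records, assuming a dynamically linked binary (illustrative):

/* bindnow.c -- check for DF_BIND_NOW / DT_BIND_NOW in the dynamic array. */
#include <elf.h>
#include <stdio.h>

extern Elf64_Dyn _DYNAMIC[];

int main(void)
{
    int bind_now = 0;
    for (Elf64_Dyn *d = _DYNAMIC; d->d_tag != DT_NULL; d++) {
        if (d->d_tag == DT_BIND_NOW)                          /* legacy tag */
            bind_now = 1;
        if (d->d_tag == DT_FLAGS && (d->d_un.d_val & DF_BIND_NOW))
            bind_now = 1;
    }
    printf("eager binding (full-RELRO GOT possible): %s\n",
           bind_now ? "yes" : "no");
    return 0;
}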
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xu b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xu new file mode 100644 index 0000000..686b2cc --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xu
@@ -0,0 +1,146 @@ +/* Script for ld -Ur: link w/out relocation, do create constructors */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + SORT(CONSTRUCTORS) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0 : + { + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
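ld -Ur behaves like ld -r except that it creates constructor tables, visible above as the extra SORT(CONSTRUCTORS) inside .data; there is still no .ctors output section, so ELF-style constructor entries pass through the partial link to be collected at the final link. A heavily hedged illustration of such an entry — explicit .ctors placement is an assumption about a ctors-based runtime, not necessarily how this toolchain's own startup files work:

/* ctorentry.c -- plant a constructor pointer in .ctors by hand. */
static void setup(void);

__attribute__((used, section(".ctors")))
static void (*const setup_entry)(void) = setup;  /* gathered with the other
                                                    .ctors entries at final link */

static void setup(void)
{
    /* runs before main on runtimes that walk the .ctors table */
}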
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xw new file mode 100644 index 0000000..f1a21a9 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64elfb.xw
@@ -0,0 +1,220 @@ +/* Script for -z combreloc -z now -z relro: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00400000); . = 0x00400000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
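This .xw file is the big-endian flavor of the combreloc/now/relro script: OUTPUT_FORMAT names three BFD targets, the default, the one selected by -EB, and the one selected by -EL, and here the first two are both elf64-bigaarch64. The shape of the directive, shown in isolation:

    /* OUTPUT_FORMAT(default, with -EB, with -EL) */
    OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64",
                  "elf64-littleaarch64")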
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.x b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.x new file mode 100644 index 0000000..0458a59 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.x
@@ -0,0 +1,217 @@ +/* Default linker script, for normal executables */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. 
We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
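Throughout these scripts, PROVIDE_HIDDEN plants start/end symbols around a kept section so runtime code can walk its contents; __preinit_array_start, __init_array_start, __fini_array_start and their _end twins above all follow one pattern. Reduced to a skeleton (the section name foo is a placeholder):

    PROVIDE_HIDDEN (__foo_start = .);
    .foo : { KEEP (*(.foo)) }
    PROVIDE_HIDDEN (__foo_end = .);

PROVIDE defines the symbol only if nothing else does, HIDDEN keeps it out of the dynamic symbol table, and KEEP protects the entries from --gc-sections.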
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xbn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xbn new file mode 100644 index 0000000..b1b4d18 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xbn
@@ -0,0 +1,214 @@ +/* Script for -N: mix text and data on same page; don't align data */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = .; + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
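The functional difference of this -N script sits in one line: where aarch64linux.x page-aligns the data segment, the .xbn script leaves the location counter alone, so data starts immediately after text; it likewise omits the DATA_SEGMENT_RELRO_END and DATA_SEGMENT_END bookkeeping, since there is no separately protected data segment. Side by side, both taken from the two scripts:

    /* aarch64linux.x: align the data segment to a page boundary */
    . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1));
    . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE));

    /* aarch64linux.xbn (-N): no alignment, data follows text directly */
    . = .;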
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xc new file mode 100644 index 0000000..71efb5c --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xc
@@ -0,0 +1,218 @@ +/* Script for -z combreloc: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
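The .xc script is what -z combreloc selects: the dozen per-section .rela.* outputs of the default script are merged into a single .rela.dyn, which ld sorts so the dynamic linker can cache symbol lookups. The skeleton of the merged section, as it appears above:

    .rela.dyn :
    {
      *(.rela.init)
      *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*)
      /* every other .rela.* input section follows here */
      *(.rela.ifunc)
    }

Only .rela.plt stays separate; the dynamic linker addresses the PLT relocations independently (via DT_JMPREL) for lazy binding.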
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xd b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xd new file mode 100644 index 0000000..45a0b74 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xd
@@ -0,0 +1,216 @@ +/* Script for ld -pie: link position independent executable */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. 
We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
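The .xd script backs ld -pie. Its section layout matches the default script; what changes is the image base, since a position-independent executable is linked at address 0 and relocated to wherever the loader maps it. The two opening lines, for contrast:

    /* aarch64linux.x (fixed-position) */
    PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS;

    /* aarch64linux.xd (ld -pie) */
    PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS;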
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xdc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xdc new file mode 100644 index 0000000..8f8705c --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xdc
@@ -0,0 +1,218 @@ +/* Script for -pie -z combreloc: position independent executable, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
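Every variant in this family delimits the IRELATIVE relocations inside .rela.plt with a hidden symbol pair; statically linked startup code (glibc's csu does this, for example) uses __rela_iplt_start and __rela_iplt_end to apply ifunc relocations itself before main runs, since no dynamic linker is present to do it. The idiom, as it appears in each script:

    .rela.plt :
    {
      *(.rela.plt)
      PROVIDE_HIDDEN (__rela_iplt_start = .);
      *(.rela.iplt)
      PROVIDE_HIDDEN (__rela_iplt_end = .);
    }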
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xdw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xdw new file mode 100644 index 0000000..c8ce9d6 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xdw
@@ -0,0 +1,217 @@ +/* Script for -pie -z combreloc -z now -z relro: position independent executable, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
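The .xdw variant serves -pie with -z combreloc -z now -z relro together. Because -z now binds everything at load time, nothing in the GOT needs to stay writable afterwards, so the script folds .got.plt into .got and passes 0 to DATA_SEGMENT_RELRO_END, placing the whole GOT inside the PT_GNU_RELRO region. Against the lazy-binding layout of .xdc:

    /* .xdc (lazy): RELRO ends 24 bytes into .got.plt, covering its three
       reserved doubleword entries; the remaining slots stay writable */
    .got : { *(.got) *(.igot) }
    . = DATA_SEGMENT_RELRO_END (24, .);
    .got.plt : { *(.got.plt) *(.igot.plt) }

    /* .xdw (-z now -z relro): one GOT, all read-only after startup */
    .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) }
    . = DATA_SEGMENT_RELRO_END (0, .);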
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xn new file mode 100644 index 0000000..fbaacc3 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xn
@@ -0,0 +1,216 @@ +/* Script for -n: mix text and data on same page */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. 
We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
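Note: the `-n` script above PROVIDEs the traditional etext/edata/end boundary symbols. A minimal sketch of how a program can observe them; the symbols come from the script, the print-out is purely illustrative:

    // Sketch: etext/edata/end are PROVIDEd by the script above.
    // Declarations only; the linker supplies the addresses.
    #include <cstdio>

    extern "C" char etext[], edata[], end[];

    int main() {
        std::printf("etext = %p\n", static_cast<void *>(etext)); // end of text segment
        std::printf("edata = %p\n", static_cast<void *>(edata)); // end of initialized data
        std::printf("end   = %p\n", static_cast<void *>(end));   // end of .bss
    }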
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xr b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xr new file mode 100644 index 0000000..533468e --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xr
@@ -0,0 +1,141 @@ +/* Script for ld -r: link without relocation */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : ALIGN(16) { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
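Note: because `ld -r` emits a relocatable object, this script keeps every `.rela.*` output section (at VMA 0) instead of resolving it. As a hedched illustration under that assumption, a single address-initialized global is enough to populate `.rela.data`:

    // Sketch: under ld -r the fix-up below is preserved in .rela.data
    // so the final link can still patch it once load addresses are known.
    int answer = 42;
    int *answer_ptr = &answer; // needs an absolute relocation at final link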
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xs b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xs new file mode 100644 index 0000000..4fa001c --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xs
@@ -0,0 +1,207 @@ +/* Script for ld --shared: link shared library */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
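Note: the `--shared` script starts at `0 + SIZEOF_HEADERS` (the dynamic linker relocates the whole image) and routes constructors through `.init_array`, which it KEEPs. A minimal sketch of code whose initializer ends up there:

    // Sketch: GCC places this function's address in .init_array, which
    // the script KEEPs, so it runs when the resulting .so is loaded
    // (for example via dlopen).
    __attribute__((constructor))
    static void on_load() {
        // library initialization
    }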
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xsc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xsc new file mode 100644 index 0000000..8470599 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xsc
@@ -0,0 +1,210 @@ +/* Script for --shared -z combreloc: shared library, combine & sort relocs */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
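Note: `-z combreloc` is visible above as the single merged `.rela.dyn` output section: the per-section `.rela.*` inputs are combined and sorted so the dynamic linker can search them quickly. In position-independent code, even a plain pointer table generates entries there; a sketch:

    // Sketch: in a -fPIC shared object each pointer below needs a
    // RELATIVE relocation; -z combreloc collects and sorts them all
    // into the single .rela.dyn section defined above.
    static const char *const names[] = { "alpha", "beta", "gamma" };
    const char *const *get_names() { return names; }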
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xsw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xsw new file mode 100644 index 0000000..ce36429 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xsw
@@ -0,0 +1,208 @@ +/* Script for --shared -z combreloc -z now -z relro: shared library, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
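Note: this `-z now -z relro` variant folds `.got.plt` into `.got` and closes the RELRO region with `DATA_SEGMENT_RELRO_END (0, .)`: with eager binding there is no need to keep PLT GOT slots writable, so the whole GOT can be remapped read-only after startup. C++ vtables are a typical occupant of the adjacent `.data.rel.ro`; a sketch:

    // Sketch: a vtable is const data that still needs load-time
    // relocation, so it lands in .data.rel.ro above; under
    // -z relro -z now that page becomes read-only once binding is done.
    struct Base {
        virtual ~Base() {}
        virtual int id() const { return 0; }
    };
    struct Derived : Base {
        virtual int id() const { return 1; }
    };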
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xu b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xu new file mode 100644 index 0000000..ad19152 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xu
@@ -0,0 +1,142 @@ +/* Script for ld -Ur: link w/out relocation, do create constructors */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : ALIGN(16) { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + SORT(CONSTRUCTORS) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
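Note: `-Ur` differs from the plain `-r` script essentially in the `SORT(CONSTRUCTORS)` clause inside `.data`: the constructor table is materialized once during the partial link so a later final link does not duplicate it. A hedged sketch of the kind of global that feeds that table:

    // Sketch: the static object below contributes an entry to the
    // constructor table that ld -Ur builds during the partial link.
    struct Tracer {
        Tracer() { /* runs before main() */ }
    };
    static Tracer tracer;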
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xw new file mode 100644 index 0000000..f0c6aec --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux.xw
@@ -0,0 +1,217 @@ +/* Script for -z combreloc -z now -z relro: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-littleaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
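Note: unlike the shared-library variants, this executable script also brackets `.rela.iplt` with the `__rela_iplt_start`/`__rela_iplt_end` markers, which static-link startup code typically walks to apply IRELATIVE relocations. The `SORT_BY_INIT_PRIORITY` clauses, meanwhile, are what GCC's `init_priority` attribute relies on; a sketch (attribute support by this 4.9 toolchain is assumed):

    // Sketch: init_priority maps to numbered .init_array.NNNNN sections,
    // which SORT_BY_INIT_PRIORITY above orders numerically, so log_first
    // is constructed before cache_second, and both run before
    // unprioritized globals in plain .init_array.
    struct Log   { Log()   {} };
    struct Cache { Cache() {} };
    __attribute__((init_priority(1001))) static Log   log_first;
    __attribute__((init_priority(2002))) static Cache cache_second;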
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.x b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.x new file mode 100644 index 0000000..6457007 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.x
@@ -0,0 +1,217 @@ +/* Default linker script, for normal executables */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
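Note: this is the ILP32 counterpart of the default script: `OUTPUT_FORMAT` switches to `elf32-littleaarch64`, `OUTPUT_ARCH` to `aarch64:ilp32`, and the `ALIGN(64 / 8)` expressions become `ALIGN(32 / 8)` to match the 4-byte word size. A sketch of the data-model consequence (that this toolchain accepts an ILP32 driver flag such as `-mabi=ilp32` is an assumption):

    // Sketch: under the ILP32 ABI targeted by this script, pointers
    // and long are 32 bits wide (static_assert requires C++11).
    static_assert(sizeof(void *) == 4, "ILP32: 32-bit pointers");
    static_assert(sizeof(long) == 4,   "ILP32: 32-bit long");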
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xbn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xbn new file mode 100644 index 0000000..c1e4121 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xbn
@@ -0,0 +1,214 @@ +/* Script for -N: mix text and data on same page; don't align data */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = .; + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
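Annotation: the script above backs ld -N. __executable_start stays at 0x400000, but the usual page-alignment step before the data segment is the no-op ". = .;", so text and data may share a page. The boundary symbols it PROVIDEs (etext, edata, end) are visible from C; a minimal sketch, assuming a hosted libc:

#include <stdio.h>

/* Classic idiom (see also man 3 end): the linker script PROVIDEs
   these symbols; only their addresses are meaningful. */
extern char etext, edata, end;

int main(void)
{
    printf("etext = %p\n", (void *)&etext);   /* end of text */
    printf("edata = %p\n", (void *)&edata);   /* end of initialized data */
    printf("end   = %p\n", (void *)&end);     /* end of bss */
    return 0;
}

Under -N, &edata typically lands just past &etext rather than on the next page boundary.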
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xc new file mode 100644 index 0000000..9d5d7bf --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xc
@@ -0,0 +1,218 @@ +/* Script for -z combreloc: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
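Annotation: the -z combreloc variant merges all the .rela.* input sections into a single .rela.dyn output section so the dynamic linker can sort and binary-search them, while .rela.plt/.rela.iplt stay separate and keep the __rela_iplt_start/__rela_iplt_end markers. A heavily simplified sketch of how a static-link runtime might consume that range, assuming ILP32 Elf32_Rela entries, a load bias of zero, and resolvers taking no arguments (a real implementation also checks r_info for R_AARCH64_IRELATIVE and passes hwcaps):

#include <stdint.h>

/* Simplified ILP32 Rela entry; real code would use <elf.h>. */
typedef struct {
    uint32_t r_offset;   /* where the resolved address is stored */
    uint32_t r_info;     /* type/symbol -- R_AARCH64_IRELATIVE here */
    int32_t  r_addend;   /* address of the ifunc resolver */
} rela32;

extern const rela32 __rela_iplt_start[];   /* PROVIDE_HIDDEN in the script */
extern const rela32 __rela_iplt_end[];

void apply_irelative(void)
{
    for (const rela32 *r = __rela_iplt_start; r != __rela_iplt_end; ++r) {
        uint32_t (*resolver)(void) =
            (uint32_t (*)(void))(uintptr_t)r->r_addend;
        *(uint32_t *)(uintptr_t)r->r_offset = resolver();
    }
}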
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xd b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xd new file mode 100644 index 0000000..445125e --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xd
@@ -0,0 +1,216 @@ +/* Script for ld -pie: link position independent executable */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
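Annotation: unlike the fixed-address scripts, this ld -pie script links the executable from base 0 (PROVIDE (__executable_start = 0)), leaving the loader free to choose where it lands. A small sketch that prints the resulting load bias via dl_iterate_phdr (a glibc/bionic API; _GNU_SOURCE is needed on glibc):

#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>
#include <stdint.h>

static int show_bias(struct dl_phdr_info *info, size_t size, void *data)
{
    (void)size; (void)data;
    /* For the main executable dlpi_name is typically "".  Linked from
       base 0, its dlpi_addr is the load bias picked by the loader. */
    printf("%s bias=%p\n",
           info->dlpi_name[0] ? info->dlpi_name : "(main executable)",
           (void *)(uintptr_t)info->dlpi_addr);
    return 0;   /* keep iterating */
}

int main(void)
{
    dl_iterate_phdr(show_bias, NULL);
    return 0;
}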
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xdc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xdc new file mode 100644 index 0000000..22b285b --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xdc
@@ -0,0 +1,218 @@ +/* Script for -pie -z combreloc: position independent executable, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
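Annotation: all of these scripts bracket .init_array/.fini_array with hidden __init_array_start/__init_array_end (and the fini equivalents) and fold legacy .ctors/.dtors input into them sorted by priority. A sketch of what ends up there, runnable with any of the executable scripts:

#include <stdio.h>

__attribute__((constructor))
static void early(void) { puts("constructor: dispatched via .init_array"); }

__attribute__((destructor))
static void late(void) { puts("destructor: dispatched via .fini_array"); }

/* Boundary symbols PROVIDE_HIDDEN'd by the script around .init_array. */
extern void (*__init_array_start[])(void);
extern void (*__init_array_end[])(void);

int main(void)
{
    printf("%td entries in .init_array\n",
           __init_array_end - __init_array_start);
    return 0;
}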
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xdw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xdw new file mode 100644 index 0000000..1be26b1 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xdw
@@ -0,0 +1,217 @@ +/* Script for -pie -z combreloc -z now -z relro: position independent executable, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
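Annotation: the -z now -z relro variant differs from the plain combreloc PIE script in one detail: .got.plt/.igot.plt are pulled into .got ahead of DATA_SEGMENT_RELRO_END (0, .), so with eager binding the entire GOT lands in the region remapped read-only after relocation (full RELRO). Relocated-but-constant data gets the same protection via .data.rel.ro; a sketch (compile with -fPIE, under which GCC emits such tables into .data.rel.ro on its own):

#include <stdio.h>

static const char alpha[] = "alpha";
static const char beta[]  = "beta";

/* The addresses of alpha/beta need load-time relocations, so under
   -fPIE this const table cannot live in .rodata; the compiler places
   it in .data.rel.ro, which the script folds into the RELRO span. */
static const char *const table[] = { alpha, beta };

int main(void)
{
    printf("%s %s\n", table[0], table[1]);
    return 0;
}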
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xn new file mode 100644 index 0000000..e1baea8 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xn
@@ -0,0 +1,216 @@ +/* Script for -n: mix text and data on same page */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
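Annotation: every variant carries the same TLS layout: initialized thread-locals are collected into .tdata, zero-initialized ones into .tbss (plus .tcommon). In C that corresponds directly to __thread variables:

#include <stdio.h>

static __thread int counter = 1;   /* initialized: placed in .tdata */
static __thread int scratch;       /* zero-initialized: placed in .tbss */

int main(void)
{
    scratch = counter + 41;
    printf("%d\n", scratch);       /* each thread gets its own copy */
    return 0;
}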
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xr b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xr new file mode 100644 index 0000000..911f381 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xr
@@ -0,0 +1,141 @@ +/* Script for ld -r: link without relocation */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : ALIGN(16) { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
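Annotation: the ld -r script gives every output section a zero address and defines none of the start/end symbols, because the output is an intermediate object whose relocations are carried along rather than applied. For example, combining the sketch below with a hypothetical part2.c via "ld -r part1.o part2.o -o combined.o" keeps the call to helper as a relocation record in combined.o for the final link to resolve:

/* part1.c -- one translation unit of a partial link.  helper() is
   assumed to come from a hypothetical part2.c; -r links without
   applying relocations, so the reference survives in combined.o. */
extern int helper(void);

int entry(void) { return helper() + 1; }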
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xs b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xs new file mode 100644 index 0000000..e0af327 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xs
@@ -0,0 +1,207 @@ +/* Script for ld --shared: link shared library */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
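Annotation: the --shared script starts at ". = 0 + SIZEOF_HEADERS" and provides no __executable_start, so a DSO has no preferred base and is placed entirely at load time. A sketch that makes the chosen base visible with dladdr (libdemo.so and demo_add are hypothetical names; link with -ldl where required):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *h = dlopen("libdemo.so", RTLD_NOW);   /* hypothetical DSO */
    if (h == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    Dl_info info;
    /* dladdr maps a symbol's runtime address back to the object that
       provides it; dli_fbase is the base chosen by the loader. */
    if (dladdr(dlsym(h, "demo_add"), &info) != 0)
        printf("%s mapped at base %p\n", info.dli_fname, info.dli_fbase);
    dlclose(h);
    return 0;
}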
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xsc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xsc new file mode 100644 index 0000000..51667be --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xsc
@@ -0,0 +1,210 @@ +/* Script for --shared -z combreloc: shared library, combine & sort relocs */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
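[Annotation: aarch64linux32.xsc above is the --shared script, so the KEEP'd .init_array/.fini_array here belong to a DSO and run at load/unload time. A hypothetical usage sketch in C (file names and build lines are illustrative only):]

/* libdemo.c (hypothetical), built as: gcc -shared -fPIC -o libdemo.so libdemo.c
     __attribute__((constructor)) static void on_load(void) { puts("loaded"); }
   Host program (hypothetical), built as: gcc -o host host.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *h = dlopen("./libdemo.so", RTLD_NOW);   /* DSO's .init_array ctors run here */
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }
    dlclose(h);                                   /* .fini_array dtors run here */
    return 0;
}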
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xsw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xsw new file mode 100644 index 0000000..8714ac5 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xsw
@@ -0,0 +1,208 @@ +/* Script for --shared -z combreloc -z now -z relro: shared library, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
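[Annotation: aarch64linux32.xsw above is the -z now -z relro variant; note it folds *(.got.plt) into .got ahead of DATA_SEGMENT_RELRO_END (0, .), which works because eager binding leaves nothing to write into the GOT after startup. A minimal C sketch of data that ends up in the RELRO region, assuming compilation with -fPIC:]

#include <stdio.h>

static void greet(void) { puts("hello"); }

/* const data containing an address needs a relocation under -fPIC,
   so it lands in .data.rel.ro and becomes read-only after relocation. */
static void (*const table[])(void) = { greet };

int main(void) {
    table[0]();              /* reading the table is fine                  */
    /* *(void **)table = 0;     writing it would fault once RELRO applies */
    return 0;
}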
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xu b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xu new file mode 100644 index 0000000..489524e --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xu
@@ -0,0 +1,142 @@ +/* Script for ld -Ur: link w/out relocation, do create constructors */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : ALIGN(16) { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + SORT(CONSTRUCTORS) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
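[Annotation: the .xu script above drives ld -Ur, a relocatable link that also builds the constructor tables; every output section keeps VMA 0, as the zero addresses in the script show. A hypothetical sketch of the workflow (paths and file names illustrative):]

/* a.c, combined with b.c via a relocatable link:
     gcc -c a.c b.c
     ld -Ur -o combined.o a.o b.o   # "link w/out relocation, do create constructors"
     gcc -o prog combined.o main.c
   combined.o keeps every section at address 0, matching the script above. */
int shared_counter;                 /* tentative definition: COMMON, gathered into .bss */
int bump(void) { return ++shared_counter; }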
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xw new file mode 100644 index 0000000..35cad49 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32.xw
@@ -0,0 +1,217 @@ +/* Script for -z combreloc -z now -z relro: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littleaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
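[Annotation: the .xw script above brackets *(.rela.iplt) with __rela_iplt_start/__rela_iplt_end, the markers startup code uses to apply IRELATIVE relocations for GNU ifuncs. A minimal C sketch, assuming a toolchain and libc with ifunc support:]

#include <stdio.h>

int add_generic(int a, int b) { return a + b; }

/* The resolver runs once, while the IRELATIVE relocations between
   __rela_iplt_start and __rela_iplt_end are being applied. */
static int (*resolve_add(void))(int, int) { return add_generic; }

int add(int, int) __attribute__((ifunc("resolve_add")));

int main(void) { printf("2 + 3 = %d\n", add(2, 3)); return 0; }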
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.x b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.x new file mode 100644 index 0000000..5b7f40e --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.x
@@ -0,0 +1,217 @@ +/* Default linker script, for normal executables */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
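[Annotation: the script above PROVIDE_HIDDENs __init_array_start/__init_array_end around .init_array; C runtime startup walks that table before main. A minimal sketch from the program's side (the walk itself happens in libc, so this only reports the table's extent):]

#include <stdio.h>

typedef void (*init_fn)(void);
/* hidden, matching the PROVIDE_HIDDEN definitions in the script */
extern init_fn __init_array_start[] __attribute__((visibility("hidden")));
extern init_fn __init_array_end[]   __attribute__((visibility("hidden")));

int main(void) {
    /* startup code has already called each entry in
       [__init_array_start, __init_array_end) in order */
    printf("%td constructor entries\n", __init_array_end - __init_array_start);
    return 0;
}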
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xbn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xbn new file mode 100644 index 0000000..4fbbd1a --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xbn
@@ -0,0 +1,214 @@ +/* Script for -N: mix text and data on same page; don't align data */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = .; + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
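[Annotation: like the other executable scripts, the -N script above PROVIDEs the traditional etext/edata/end symbols at the segment boundaries. A minimal C sketch of observing them:]

#include <stdio.h>

extern char etext, edata, end;   /* defined by the script's assignments/PROVIDEs */

int initialized = 1;             /* .data: sits below edata */
int zeroed;                      /* .bss:  sits below end   */

int main(void) {
    printf("etext %p  edata %p  end %p\n",
           (void *)&etext, (void *)&edata, (void *)&end);
    printf("&initialized %p  &zeroed %p\n",
           (void *)&initialized, (void *)&zeroed);
    return 0;
}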
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xc new file mode 100644 index 0000000..ef379e9 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xc
@@ -0,0 +1,218 @@ +/* Script for -z combreloc: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
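[Annotation: the script above sets PROVIDE (__executable_start = 0x400000) and places headers and text from there, ending at etext. A minimal C sketch that measures the text segment using those two script-provided symbols:]

#include <stdio.h>
#include <stddef.h>

extern char __executable_start, etext;   /* both PROVIDEd by the script */

int main(void) {
    printf("text: %p..%p (%zu bytes)\n",
           (void *)&__executable_start, (void *)&etext,
           (size_t)(&etext - &__executable_start));
    return 0;
}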
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xd b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xd new file mode 100644 index 0000000..8b1a5f0 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xd
@@ -0,0 +1,216 @@ +/* Script for ld -pie: link position independent executable */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
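Note on the __rela_iplt_start/__rela_iplt_end pair that this -pie script PROVIDE_HIDDENs around .rela.iplt: those bounds exist so that startup code in a statically linked executable can apply the ifunc (IRELATIVE) relocations itself, since no dynamic linker will do it. A minimal C sketch of such a loop, assuming the native ELF Rela layout (Elf32_Rela under ILP32) and that every record in the range is an R_AARCH64_IRELATIVE relocation, which is all the script collects there; the type and function names below are illustrative, not taken from the toolchain, and a real PIE would additionally add the load bias to r_offset and r_addend:

    #include <stdint.h>

    /* Illustrative native-width Rela record. */
    typedef struct {
        uintptr_t r_offset;   /* GOT slot to patch */
        uintptr_t r_info;     /* reloc type; IRELATIVE throughout .rela.iplt */
        intptr_t  r_addend;   /* address of the ifunc resolver function */
    } rela_t;

    extern const rela_t __rela_iplt_start[];
    extern const rela_t __rela_iplt_end[];

    /* Call each resolver and store the function address it returns
       into the slot named by r_offset. */
    static void apply_irelative(void)
    {
        for (const rela_t *r = __rela_iplt_start; r < __rela_iplt_end; ++r) {
            uintptr_t (*resolver)(void) = (uintptr_t (*)(void)) r->r_addend;
            *(uintptr_t *) r->r_offset = resolver();
        }
    }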
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xdc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xdc new file mode 100644 index 0000000..4062425 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xdc
@@ -0,0 +1,218 @@ +/* Script for -pie -z combreloc: position independent executable, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
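The only structural difference between this .xdc script and the plain -pie script above is that -z combreloc funnels all the per-section .rela.* inputs into a single .rela.dyn output section, which ld sorts so the dynamic linker can process the relocations in fewer passes. The script also PROVIDE_HIDDENs bounds around .preinit_array and .init_array; a simplified C sketch of how a runtime walks them (real C runtimes such as glibc pass argc, argv, and envp to each function, which is omitted here):

    typedef void (*init_fn)(void);

    extern init_fn __preinit_array_start[], __preinit_array_end[];
    extern init_fn __init_array_start[],    __init_array_end[];

    /* Run pre-initializers first, then ordinary initializers,
       each in array order. */
    static void run_init_arrays(void)
    {
        for (init_fn *fn = __preinit_array_start; fn != __preinit_array_end; ++fn)
            (*fn)();
        for (init_fn *fn = __init_array_start; fn != __init_array_end; ++fn)
            (*fn)();
    }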
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xdw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xdw new file mode 100644 index 0000000..8d1a62a --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xdw
@@ -0,0 +1,217 @@ +/* Script for -pie -z combreloc -z now -z relro: position independent executable, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
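Relative to the lazy-binding .xdc variant, this -z now -z relro script folds .got.plt and .igot.plt into .got ahead of DATA_SEGMENT_RELRO_END (0, .), so the entire GOT falls inside the PT_GNU_RELRO segment and is remapped read-only once relocation is done. The lazy variant instead passes 12, which appears to cover the three reserved 4-byte ILP32 .got.plt header entries the dynamic linker writes only at startup, while the remaining .got.plt slots stay writable for lazy PLT fixups. A small sketch, assuming -fPIC compilation and hypothetical identifiers, of the kind of data that the .data.rel.ro section (also inside RELRO) exists for:

    /* const, yet every element needs a load-time relocation, so a PIC
       object cannot put this in .rodata; it lands in .data.rel.ro and
       becomes read-only after relocations have been applied. */
    extern int option_a, option_b;      /* assumed defined elsewhere */

    int *const dispatch_table[] = { &option_a, &option_b };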
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xn new file mode 100644 index 0000000..f6bfbf6 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xn
@@ -0,0 +1,216 @@ +/* Script for -n: mix text and data on same page */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
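Unlike the PIE scripts, which start the image at 0 + SIZEOF_HEADERS, this fixed-position -n script bases the executable at 0x400000 and PROVIDEs the traditional etext/edata/end boundary symbols (only when nothing else defines them). A minimal sketch of inspecting those boundaries from C:

    #include <stdio.h>

    /* The addresses of these symbols are the boundaries; the symbols
       themselves carry no meaningful value. */
    extern char etext, edata, end;

    int main(void)
    {
        printf("text ends at             %p\n", (void *) &etext);
        printf("initialized data ends at %p\n", (void *) &edata);
        printf("bss ends at              %p\n", (void *) &end);
        return 0;
    }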
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xr b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xr new file mode 100644 index 0000000..d820cb1 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xr
@@ -0,0 +1,141 @@ +/* Script for ld -r: link without relocation */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : ALIGN(16) { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xs b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xs new file mode 100644 index 0000000..65c3f24 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xs
@@ -0,0 +1,207 @@ +/* Script for ld --shared: link shared library */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
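The SORT_BY_INIT_PRIORITY(.init_array.*) clauses in this and the preceding scripts are what make GCC's constructor priorities work: a constructor with priority N is emitted into a section whose name carries a zero-padded numeric suffix (e.g. .init_array.00101), the numeric sort places lower-numbered sections first, and unprioritized constructors (plain .init_array) run after all prioritized ones. A small sketch, applicable to executables and shared objects alike; priorities up to 100 are reserved for the implementation:

    #include <stdio.h>

    __attribute__((constructor(101)))
    static void early_init(void) { puts("priority 101: runs first"); }

    __attribute__((constructor))
    static void default_init(void) { puts("no priority: runs after prioritized ctors"); }

    int main(void) { return 0; }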
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xsc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xsc new file mode 100644 index 0000000..3cd41b8 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xsc
@@ -0,0 +1,210 @@ +/* Script for --shared -z combreloc: shared library, combine & sort relocs */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (12, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xsw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xsw new file mode 100644 index 0000000..6d0a361 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xsw
@@ -0,0 +1,208 @@ +/* Script for --shared -z combreloc -z now -z relro: shared library, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xu b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xu new file mode 100644 index 0000000..4b1f594 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xu
@@ -0,0 +1,142 @@ +/* Script for ld -Ur: link w/out relocation, do create constructors */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : ALIGN(16) { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + SORT(CONSTRUCTORS) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
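The -Ur variant above differs from plain ld -r in that it already builds the constructor/destructor tables in the relocatable output, while every output section keeps VMA 0 for the intermediate, still-relocatable link. A minimal C sketch of what those tables end up carrying (file and function names here are illustrative, not part of the toolchain):

    /* ctor.c -- a static constructor/destructor pair.  After an "ld -Ur"
       incremental link, the intermediate object already contains the
       collected .init_array/.fini_array (or .ctors/.dtors) entries, so the
       final link does not have to regenerate them. */
    #include <stdio.h>

    __attribute__((constructor))
    static void on_load(void) { puts("constructor: runs before main"); }

    __attribute__((destructor))
    static void on_unload(void) { puts("destructor: runs after main"); }

    int main(void) { return 0; }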
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xw new file mode 100644 index 0000000..0169390 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linux32b.xw
@@ -0,0 +1,217 @@ +/* Script for -z combreloc -z now -z relro: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigaarch64", "elf32-bigaarch64", + "elf32-littleaarch64") +OUTPUT_ARCH(aarch64:ilp32) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
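Compared with the plain combreloc layout, this -z now -z relro variant folds .got.plt into .got and closes the RELRO region with DATA_SEGMENT_RELRO_END (0, .): with lazy binding disabled, the loader can finish all GOT fixups at startup and remap the entire GOT read-only. At the C level, the .data.rel.ro rule covers objects like the middle one below (placement as sketched assumes GCC's usual -fPIC/-fPIE section conventions; the names are illustrative):

    /* relro.c -- where initialized pointers typically land when built PIC. */
    static const char msg[] = "hello";   /* .rodata: no relocation needed   */
    char *const ro_ptr = (char *)msg;    /* .data.rel.ro: fixed up at load,
                                            read-only once RELRO applies    */
    char *rw_ptr = (char *)msg;          /* .data: remains writable         */

    int main(void) { return ro_ptr == rw_ptr ? 0 : 1; }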
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.x b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.x new file mode 100644 index 0000000..f695212 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.x
@@ -0,0 +1,217 @@ +/* Default linker script, for normal executables */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. 
We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
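Like its siblings, the default script PROVIDEs the classic etext/edata/end symbols, so a program can observe where its own segments finish. A small self-contained check (standard usage, nothing specific to this toolchain):

    /* layout.c -- print the segment-boundary symbols from the link script. */
    #include <stdio.h>

    extern char etext, edata, end;   /* first addresses past text,
                                        initialized data, and bss */

    int main(void)
    {
        printf("etext: %p\n", (void *)&etext);
        printf("edata: %p\n", (void *)&edata);
        printf("end:   %p\n", (void *)&end);
        return 0;
    }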
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xbn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xbn new file mode 100644 index 0000000..ad53bc8 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xbn
@@ -0,0 +1,214 @@ +/* Script for -N: mix text and data on same page; don't align data */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = .; + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xc new file mode 100644 index 0000000..7bda7f5 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xc
@@ -0,0 +1,218 @@ +/* Script for -z combreloc: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
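Besides merging the .rela.* input sections into a single .rela.dyn (which lets the linker sort relocations for the dynamic loader), this script keeps the same SORT_BY_INIT_PRIORITY rules for .init_array as its siblings. Those rules are what make GCC's constructor priorities take effect; a sketch (the priority values below are arbitrary legal choices):

    /* prio.c -- GCC emits .init_array.00101 / .init_array.00202 here, and
       SORT_BY_INIT_PRIORITY orders them, so "first" prints before "second"
       regardless of object order on the link line. */
    #include <stdio.h>

    __attribute__((constructor(101))) static void first(void)  { puts("first");  }
    __attribute__((constructor(202))) static void second(void) { puts("second"); }

    int main(void) { return 0; }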
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xd b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xd new file mode 100644 index 0000000..bb186db --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xd
@@ -0,0 +1,216 @@ +/* Script for ld -pie: link position independent executable */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. 
We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
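The -pie script links the image at base 0 (PROVIDE (__executable_start = 0)), leaving the loader free to place the whole executable; the address observed at run time is therefore just the randomized load base. A quick way to see it, using the symbol the script itself provides:

    /* bias.c -- print the PIE load base via the script-provided symbol. */
    #include <stdio.h>

    extern char __executable_start;

    int main(void)
    {
        printf("load base: %p\n", (void *)&__executable_start);
        return 0;
    }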
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xdc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xdc new file mode 100644 index 0000000..cb25aaa --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xdc
@@ -0,0 +1,218 @@ +/* Script for -pie -z combreloc: position independent executable, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
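Note the __rela_iplt_start/__rela_iplt_end markers bracketing .rela.iplt in the .rela.plt rule: in statically linked programs the C runtime walks that range to apply IRELATIVE relocations, i.e. to run IFUNC resolvers. A minimal GNU ifunc sketch that produces such a relocation (names are illustrative):

    /* ifunc.c -- f is bound to an implementation once, by its resolver. */
    static int impl(void) { return 1; }

    /* runs during relocation processing and picks the implementation */
    static int (*resolve_f(void))(void) { return impl; }

    int f(void) __attribute__((ifunc("resolve_f")));

    int main(void) { return f() == 1 ? 0 : 1; }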
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xdw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xdw new file mode 100644 index 0000000..27e3d36 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xdw
@@ -0,0 +1,217 @@ +/* Script for -pie -z combreloc -z now -z relro: position independent executable, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
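[Illustrative note, not part of the imported files: the script above PROVIDEs the classic layout symbols (__executable_start, etext, edata, end), so a program can locate its own segment boundaries. A minimal C sketch, assuming a GNU/Linux-style link where user code does not define these names itself (PROVIDE only kicks in then):]

#include <stdio.h>

extern char __executable_start[];  /* PROVIDEd at the image base */
extern char etext[];               /* first address past the text segment */
extern char edata[];               /* first address past initialized data */
extern char end[];                 /* first address past .bss */

int main(void)
{
    /* For this -pie script the values are load-biased at run time,
       but the symbols still bracket the segments. */
    printf("text: %p..%p\n", (void *)__executable_start, (void *)etext);
    printf("data ends at %p, bss ends at %p\n", (void *)edata, (void *)end);
    return 0;
}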
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xn new file mode 100644 index 0000000..851cb96 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xn
@@ -0,0 +1,216 @@ +/* Script for -n: mix text and data on same page */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. 
We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
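[Illustrative note, not part of the imported files: the __rela_iplt_start/__rela_iplt_end pair that this script wraps around .rela.iplt exists for GNU indirect functions — each ifunc gets an IRELATIVE relocation there, and statically linked startup code applies that range itself. A hedged C sketch, assuming GCC/binutils ifunc support on this target:]

#include <stdio.h>

static int add_generic(int a, int b) { return a + b; }

/* The resolver runs during relocation processing and returns the
   implementation to bind; a real one would probe CPU features. */
static void *resolve_add(void) { return (void *)add_generic; }

int add(int a, int b) __attribute__((ifunc("resolve_add")));

int main(void)
{
    printf("%d\n", add(2, 3));  /* dispatched through the iplt/igot slot */
    return 0;
}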
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xr b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xr new file mode 100644 index 0000000..6a304d5 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xr
@@ -0,0 +1,141 @@ +/* Script for ld -r: link without relocation */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : ALIGN(16) { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xs b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xs new file mode 100644 index 0000000..7bd9007 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xs
@@ -0,0 +1,207 @@ +/* Script for ld --shared: link shared library */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.init : { *(.rela.init) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rela.fini : { *(.rela.fini) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rela.ctors : { *(.rela.ctors) } + .rela.dtors : { *(.rela.dtors) } + .rela.got : { *(.rela.got) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rela.ifunc : { *(.rela.ifunc) } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
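[Illustrative note, not part of the imported files: the .init_array/.fini_array rules above are what make constructor priorities work — GCC places a constructor with priority N in a numbered section (e.g. .init_array.00101), SORT_BY_INIT_PRIORITY orders those numerically and interleaves legacy .ctors.* entries, and unprioritized constructors land in plain .init_array and run afterwards in link order. A small C sketch of the resulting run order:]

#include <stdio.h>

__attribute__((constructor(101)))    /* -> .init_array.00101, runs first */
static void early(void) { puts("early (priority 101)"); }

__attribute__((constructor(65000)))  /* -> .init_array.65000, runs next */
static void late(void)  { puts("late (priority 65000)"); }

__attribute__((constructor))         /* -> plain .init_array, runs last */
static void plain(void) { puts("plain constructor"); }

int main(void) { puts("main"); return 0; }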
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xsc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xsc new file mode 100644 index 0000000..6800d95 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xsc
@@ -0,0 +1,210 @@ +/* Script for --shared -z combreloc: shared library, combine & sort relocs */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (24, .); + .got.plt : { *(.got.plt) *(.igot.plt) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
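[Illustrative note, not part of the imported files: the .tdata/.tbss output sections above hold the TLS template — initialized thread-locals go to .tdata and are copied per thread, zero-initialized ones to .tbss — while -z combreloc folds their relocations into the sorted .rela.dyn. A minimal C sketch, assuming GNU C __thread:]

#include <stdio.h>

__thread int counter = 42;  /* initialized  -> .tdata */
__thread int scratch;       /* zero-filled  -> .tbss  */

int main(void)
{
    counter++;              /* each thread would see its own copy */
    scratch = counter;
    printf("counter=%d scratch=%d\n", counter, scratch);
    return 0;
}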
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xsw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xsw new file mode 100644 index 0000000..355c484 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xsw
@@ -0,0 +1,208 @@ +/* Script for --shared -z combreloc -z now -z relro: shared library, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + *(.rela.iplt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
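[Illustrative note, not part of the imported files: the material difference from the .xsc script earlier is RELRO coverage — here .got.plt/.igot.plt are folded into .got before DATA_SEGMENT_RELRO_END (0, .), so with -z now the entire GOT sits inside the region remapped read-only after startup, whereas .xsc keeps .got.plt writable past DATA_SEGMENT_RELRO_END (24, .). A rough way to observe the read-only relro mappings at run time, assuming Linux /proc/self/maps:]

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[512];
    FILE *maps = fopen("/proc/self/maps", "r");
    if (!maps)
        return 1;
    /* Formerly writable data that the loader remapped read-only
       (the relro region, including the GOT) shows up as "r--p". */
    while (fgets(line, sizeof line, maps))
        if (strstr(line, "r--p"))
            fputs(line, stdout);
    fclose(maps);
    return 0;
}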
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xu b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xu new file mode 100644 index 0000000..51057d1 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xu
@@ -0,0 +1,142 @@ +/* Script for ld -Ur: link w/out relocation, do create constructors */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rela.init 0 : { *(.rela.init) } + .rela.text 0 : { *(.rela.text) } + .rela.fini 0 : { *(.rela.fini) } + .rela.rodata 0 : { *(.rela.rodata) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rela.data 0 : { *(.rela.data) } + .rela.tdata 0 : { *(.rela.tdata) } + .rela.tbss 0 : { *(.rela.tbss) } + .rela.ctors 0 : { *(.rela.ctors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rela.got 0 : { *(.rela.got) } + .rela.bss 0 : { *(.rela.bss) } + .rela.ifunc 0 : { *(.rela.ifunc) } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt 0 : ALIGN(16) { *(.plt) *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got) *(.igot) } + .got.plt 0 : { *(.got.plt) *(.igot.plt) } + .data 0 : + { + *(.data) + SORT(CONSTRUCTORS) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xw new file mode 100644 index 0000000..0d056e0 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/aarch64linuxb.xw
@@ -0,0 +1,217 @@ +/* Script for -z combreloc -z now -z relro: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf64-bigaarch64", "elf64-bigaarch64", + "elf64-littleaarch64") +OUTPUT_ARCH(aarch64) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x400000); . = 0x400000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.ifunc) + } + .rela.plt : + { + *(.rela.plt) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } =0 + .plt : ALIGN(16) { *(.plt) *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } =0 + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } =0 + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . 
= ALIGN(64 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(64 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(64 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(64 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.x b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.x new file mode 100644 index 0000000..135617f --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.x
@@ -0,0 +1,249 @@ +/* Default linker script, for normal executables */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x8000); . = 0x8000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. 
*/ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. 
+ The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
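Note: the default script above keeps crtbegin*.o entries first in .init_array/.ctors, then the sections sorted by SORT_BY_INIT_PRIORITY, then the unordered entries, so prioritized constructors run in ascending priority order ahead of plain ones. A minimal C sketch of the observable effect, assuming a GCC-style toolchain that emits constructors into .init_array.* (the priority values are illustrative; 0-100 are reserved for the implementation):

/* Sketch: constructor priorities land in .init_array.NNNNN input
   sections, which the script above sorts ahead of plain .init_array. */
#include <stdio.h>

__attribute__((constructor(101)))   /* typically emitted as .init_array.00101 */
static void early(void) { puts("early (priority 101)"); }

__attribute__((constructor(200)))   /* typically emitted as .init_array.00200 */
static void later(void) { puts("later (priority 200)"); }

__attribute__((constructor))        /* plain .init_array entry, runs after sorted ones */
static void unordered(void) { puts("unordered"); }

int main(void) { puts("main"); return 0; }

Expected output order: early, later, unordered, main.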
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xbn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xbn new file mode 100644 index 0000000..0e3c4d1 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xbn
@@ -0,0 +1,248 @@ +/* Script for -N: mix text and data on same page; don't align data */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x8000); . = 0x8000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. 
*/ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = .; + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . 
; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
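Note: armelf.xbn (ld -N) is the same layout with the data-segment adjustment dropped. Where the default script advances to the next page while keeping the location congruent to the file offset modulo the page size (. = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1));), this script simply says . = .; so text and data share pages. The boundary symbols both scripts PROVIDE can be read from C; a sketch using only names defined above:

#include <stdio.h>

extern char etext, edata, end;  /* PROVIDEd by the linker script */

int main(void) {
    printf("etext = %p\n", (void *)&etext);  /* end of text */
    printf("edata = %p\n", (void *)&edata);  /* end of initialized data */
    printf("end   = %p\n", (void *)&end);    /* end of .bss */
    return 0;
}

Under -N, edata typically sits much closer to etext than in the page-aligned default layout.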
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xc new file mode 100644 index 0000000..63a2bad --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xc
@@ -0,0 +1,247 @@ +/* Script for -z combreloc: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x8000); . = 0x8000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. 
& (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
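Note: armelf.xc (-z combreloc) replaces the default script's per-section .rel.* / .rela.* outputs with single combined .rel.dyn / .rela.dyn sections that the linker sorts, improving locality for the dynamic loader. The combined table can be located through the .dynamic section the script also emits; a sketch for a dynamically linked program (ARM uses REL-format relocations, so DT_REL/DT_RELSZ are the relevant tags; the explicit _DYNAMIC declaration is an assumption about the C library):

#include <link.h>
#include <stdio.h>

extern ElfW(Dyn) _DYNAMIC[];  /* start of this program's .dynamic section */

int main(void) {
    for (ElfW(Dyn) *d = _DYNAMIC; d->d_tag != DT_NULL; ++d) {
        if (d->d_tag == DT_REL)
            printf("DT_REL   = %#lx\n", (unsigned long)d->d_un.d_ptr);
        else if (d->d_tag == DT_RELSZ)
            printf("DT_RELSZ = %lu bytes\n", (unsigned long)d->d_un.d_val);
    }
    return 0;
}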
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xn new file mode 100644 index 0000000..1e720bc --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xn
@@ -0,0 +1,248 @@ +/* Script for -n: mix text and data on same page */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x8000); . = 0x8000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. 
*/ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . 
; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xr b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xr new file mode 100644 index 0000000..29fc391 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xr
@@ -0,0 +1,170 @@ +/* Script for ld -r: link without relocation */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rel.init 0 : { *(.rel.init) } + .rela.init 0 : { *(.rela.init) } + .rel.text 0 : { *(.rel.text) } + .rela.text 0 : { *(.rela.text) } + .rel.fini 0 : { *(.rel.fini) } + .rela.fini 0 : { *(.rela.fini) } + .rel.rodata 0 : { *(.rel.rodata) } + .rela.rodata 0 : { *(.rela.rodata) } + .rel.data.rel.ro 0 : { *(.rel.data.rel.ro) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rel.data 0 : { *(.rel.data) } + .rela.data 0 : { *(.rela.data) } + .rel.tdata 0 : { *(.rel.tdata) } + .rela.tdata 0 : { *(.rela.tdata) } + .rel.tbss 0 : { *(.rel.tbss) } + .rela.tbss 0 : { *(.rela.tbss) } + .rel.ctors 0 : { *(.rel.ctors) } + .rela.ctors 0 : { *(.rela.ctors) } + .rel.dtors 0 : { *(.rel.dtors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rel.got 0 : { *(.rel.got) } + .rela.got 0 : { *(.rela.got) } + .rel.bss 0 : { *(.rel.bss) } + .rela.bss 0 : { *(.rela.bss) } + .rel.iplt 0 : + { + *(.rel.iplt) + } + .rela.iplt 0 : + { + *(.rela.iplt) + } + .rel.plt 0 : + { + *(.rel.plt) + } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } + .plt 0 : { *(.plt) } + .iplt 0 : { *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .ARM.extab 0 : { *(.ARM.extab) } + .ARM.exidx 0 : { *(.ARM.exidx) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. 
*/ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data 0 : + { + *(.data) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0 : + { + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
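Note: armelf.xr (ld -r) produces an intermediate relocatable object: every output section is assigned VMA 0 (the Solaris note near the top of the script explains why), relocations are kept in their own .rel.* sections instead of being applied, and none of the etext/edata/end symbols are defined. A sketch of the workflow; the toolchain prefix and file names are assumptions:

/* part.c: the global access in bump() leaves a relocation against
   shared_counter in the partially linked output's .rel.text.
   Hypothetical commands:
     arm-linux-androideabi-gcc -c part.c -o part.o
     arm-linux-androideabi-ld -r part.o -o combined.o
     arm-linux-androideabi-readelf -r combined.o   # relocations preserved */
int shared_counter = 42;

int bump(void) { return ++shared_counter; }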
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xs b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xs new file mode 100644 index 0000000..eff1d54 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xs
@@ -0,0 +1,237 @@ +/* Script for ld --shared: link shared library */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + *(.rel.iplt) + } + .rela.iplt : + { + *(.rela.iplt) + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . 
= ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
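Note: armelf.xs (ld --shared) starts link-time addresses at 0 + SIZEOF_HEADERS and lets the dynamic loader place the image, so there is no fixed 0x8000 base; unlike the executable scripts, it also does not PROVIDE the __preinit_array/__init_array/__fini_array start and end symbols. A sketch of a library linked through it; the file name and toolchain prefix are assumptions:

/* demo.c, hypothetical build:
     arm-linux-androideabi-gcc -shared -fPIC -o libdemo.so demo.c */
#include <stdio.h>

__attribute__((constructor))  /* lands in this script's .init_array */
static void on_load(void) { puts("libdemo loaded"); }

int demo_add(int a, int b) { return a + b; }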
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xsc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xsc new file mode 100644 index 0000000..a7c4f0e --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xsc
@@ -0,0 +1,237 @@ +/* Script for --shared -z combreloc: shared library, combine & sort relocs */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + *(.rel.iplt) + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.iplt) + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. 
& (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xsw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xsw new file mode 100644 index 0000000..2342aaa --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xsw
@@ -0,0 +1,236 @@ +/* Script for --shared -z combreloc -z now -z relro: shared library, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + *(.rel.iplt) + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.iplt) + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. 
& (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
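Note: the PROVIDE (__etext/_etext/etext) and edata/end assignments in the script above are what let a program observe its own layout. A minimal C sketch (the symbols are exactly those defined above; they carry no storage, only their addresses are meaningful):

#include <stdio.h>

/* Defined by the PROVIDE() assignments in the linker script;
   declared as char so &sym yields the raw link-time address. */
extern char etext, edata, end;

int main(void) {
    printf("text  ends at %p\n", (void *)&etext); /* end of code */
    printf("data  ends at %p\n", (void *)&edata); /* end of initialized data */
    printf("image ends at %p\n", (void *)&end);   /* end of .bss */
    return 0;
}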
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xu b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xu new file mode 100644 index 0000000..a12dc39 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xu
@@ -0,0 +1,171 @@ +/* Script for ld -Ur: link w/out relocation, do create constructors */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rel.init 0 : { *(.rel.init) } + .rela.init 0 : { *(.rela.init) } + .rel.text 0 : { *(.rel.text) } + .rela.text 0 : { *(.rela.text) } + .rel.fini 0 : { *(.rel.fini) } + .rela.fini 0 : { *(.rela.fini) } + .rel.rodata 0 : { *(.rel.rodata) } + .rela.rodata 0 : { *(.rela.rodata) } + .rel.data.rel.ro 0 : { *(.rel.data.rel.ro) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rel.data 0 : { *(.rel.data) } + .rela.data 0 : { *(.rela.data) } + .rel.tdata 0 : { *(.rel.tdata) } + .rela.tdata 0 : { *(.rela.tdata) } + .rel.tbss 0 : { *(.rel.tbss) } + .rela.tbss 0 : { *(.rela.tbss) } + .rel.ctors 0 : { *(.rel.ctors) } + .rela.ctors 0 : { *(.rela.ctors) } + .rel.dtors 0 : { *(.rel.dtors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rel.got 0 : { *(.rel.got) } + .rela.got 0 : { *(.rela.got) } + .rel.bss 0 : { *(.rel.bss) } + .rela.bss 0 : { *(.rela.bss) } + .rel.iplt 0 : + { + *(.rel.iplt) + } + .rela.iplt 0 : + { + *(.rela.iplt) + } + .rel.plt 0 : + { + *(.rel.plt) + } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } + .plt 0 : { *(.plt) } + .iplt 0 : { *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .ARM.extab 0 : { *(.ARM.extab) } + .ARM.exidx 0 : { *(.ARM.exidx) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. 
*/ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data 0 : + { + *(.data) + SORT(CONSTRUCTORS) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0 : + { + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
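Note: even this relocatable (-Ur) script KEEPs .preinit_array, so entries survive a partial link (e.g. ld -Ur -o combined.o a.o b.o) and later section garbage collection. A hedged sketch of how such an entry is typically authored in C (the section-attribute idiom is a common GCC convention, not something this script itself mandates):

/* Runs before all constructors; glibc-style startup walks the
   __preinit_array_start..__preinit_array_end range. */
static void early_setup(void) { /* pre-constructor work */ }

/* 'used' stops the compiler discarding the entry;
   KEEP() in the script stops the linker discarding the section. */
__attribute__((section(".preinit_array"), used))
static void (*early_setup_entry)(void) = early_setup;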
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xw new file mode 100644 index 0000000..58f574e --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf.xw
@@ -0,0 +1,247 @@ +/* Script for -z combreloc -z now -z relro: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x8000); . = 0x8000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. 
& (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
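Note: the hidden __rel_iplt_start/__rel_iplt_end brackets above exist for statically linked startup code, which must apply R_ARM_IRELATIVE (ifunc) relocations itself. A sketch of that loop, modeled loosely on glibc's csu code (REL format, as used on 32-bit ARM; assumes the image is already running at its link address):

#include <elf.h>

extern const Elf32_Rel __rel_iplt_start[], __rel_iplt_end[];

static void apply_irelative(void) {
    for (const Elf32_Rel *r = __rel_iplt_start; r < __rel_iplt_end; ++r) {
        Elf32_Addr *where = (Elf32_Addr *) r->r_offset;
        /* The slot initially holds the ifunc resolver's address;
           call it and store the implementation it selects. */
        Elf32_Addr (*resolver)(void) = (Elf32_Addr (*)(void)) *where;
        *where = resolver();
    }
}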
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.x b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.x new file mode 100644 index 0000000..04d195b --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.x
@@ -0,0 +1,246 @@ +/* Default linker script, for normal executables */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00010000); . = 0x00010000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. 
*/ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. 
+ The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
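Note: the __init_array_start/__fini_array_end brackets provided above are what the C runtime iterates to run constructor and destructor attributes; the crtbegin/crtend wildcards and the SORT_BY_INIT_PRIORITY clauses fix their order. A minimal sketch using the standard GCC attributes:

#include <stdio.h>

/* The compiler emits pointers to these into .init_array/.fini_array
   (or legacy .ctors/.dtors); the script sorts them and KEEPs them
   even under --gc-sections. */
__attribute__((constructor))
static void on_load(void) { puts("constructor"); }

__attribute__((destructor))
static void on_unload(void) { puts("destructor"); }

int main(void) {
    puts("main");   /* output order: constructor, main, destructor */
    return 0;
}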
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xbn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xbn new file mode 100644 index 0000000..4a05e8a --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xbn
@@ -0,0 +1,243 @@ +/* Script for -N: mix text and data on same page; don't align data */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00010000); . = 0x00010000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. 
*/ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = .; + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. 
+ The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
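Note: the __bss_start__/__bss_end__ pair defined around .bss above is the canonical bracket for zero-fill. On Linux the kernel zeroes .bss when mapping the executable, so a loop like this matters mainly for bare-metal-style startup; it is shown only to illustrate what the symbols delimit:

extern char __bss_start__[], __bss_end__[];

/* Zero everything between the two script-defined brackets. */
static void clear_bss(void) {
    for (char *p = __bss_start__; p < __bss_end__; ++p)
        *p = 0;
}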
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xc new file mode 100644 index 0000000..2aa9174 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xc
@@ -0,0 +1,244 @@ +/* Script for -z combreloc: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00010000); . = 0x00010000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . 
= DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
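Note: __exidx_start/__exidx_end above bound the ARM EHABI exception index (.ARM.exidx) that the unwinder binary-searches during exception propagation. Each index entry is two 32-bit words, so the table size falls out directly; a sketch:

extern const char __exidx_start[], __exidx_end[];

/* Each .ARM.exidx entry is an 8-byte pair: a PREL31 offset to the
   function plus either inline unwind data or a pointer to it. */
unsigned long exidx_entry_count(void) {
    return (unsigned long)(__exidx_end - __exidx_start) / 8;
}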
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xd b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xd new file mode 100644 index 0000000..a49e267 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xd
@@ -0,0 +1,245 @@ +/* Script for ld -pie: link position independent executable */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. 
*/ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. 
+ The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
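Note: because this -pie script links at base 0 (PROVIDE (__executable_start = 0)), the runtime address of __executable_start is exactly the relocation bias the loader applied. A sketch:

extern const char __executable_start[];

/* In a PIE linked at 0, this is where the kernel/loader
   actually mapped the executable. */
const void *load_base(void) {
    return (const void *) __executable_start;
}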
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xdc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xdc new file mode 100644 index 0000000..f90f0b3 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xdc
@@ -0,0 +1,244 @@ +/* Script for -pie -z combreloc: position independent executable, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . 
= DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
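For reference, the script above PROVIDEs the traditional end-of-segment symbols (__etext/_etext/etext after .fini, _edata/edata after .data, _end/end after .bss); PROVIDE defines a symbol only if the link references it and nothing else defines it. A minimal sketch of consuming them from C in a hosted Linux/Android build (nothing below is part of the imported toolchain):

    /* Sketch: reading the layout symbols PROVIDEd by the linker script.
       The symbols are addresses, so take &symbol; never read a value. */
    #include <stdio.h>

    extern char etext, edata, end;

    int main(void)
    {
        printf("etext = %p\n", (void *)&etext);  /* end of text segment */
        printf("edata = %p\n", (void *)&edata);  /* end of initialized data */
        printf("end   = %p\n", (void *)&end);    /* end of .bss */
        return 0;
    }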
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xdw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xdw new file mode 100644 index 0000000..056f45a --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xdw
@@ -0,0 +1,244 @@ +/* Script for -pie -z combreloc -z now -z relro: position independent executable, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . 
= DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
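Compared with the -pie -z combreloc script above it, the only layout change in this -z now -z relro variant is that .got (including .got.plt) is emitted before DATA_SEGMENT_RELRO_END rather than after it, so the GOT falls inside the region the loader remaps read-only once relocation is done; with -z now every PLT slot is resolved at load time, so nothing in it needs to stay writable. Const data that still carries dynamic relocations lands in .data.rel.ro inside the same protected region, as in this hedged sketch (compile with -fPIC; the section choice is the compiler's, checkable with objdump):

    /* Sketch: a const table of pointers needs load-time relocation, so
       with -fPIC the compiler places it in .data.rel.ro (or
       .data.rel.ro.local), which this script keeps ahead of
       DATA_SEGMENT_RELRO_END and therefore inside the RELRO region. */
    static int a = 1, b = 2;

    const int *const table[] = { &a, &b };

    int sum(void)
    {
        return *table[0] + *table[1];
    }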
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xn new file mode 100644 index 0000000..5e64b79 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xn
@@ -0,0 +1,245 @@ +/* Script for -n: mix text and data on same page */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00010000); . = 0x00010000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. 
*/ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. 
+ The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
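The .init_array/.fini_array rules in these scripts KEEP constructor fragments and order them with SORT_BY_INIT_PRIORITY, so GCC's numbered .init_array.NNNNN sections run lowest number first, ahead of any unnumbered entries (priorities 0-100 are reserved for the implementation). A small sketch of the ordering this buys, again purely illustrative:

    /* Sketch: constructor priorities map to sections such as
       .init_array.00101, which SORT_BY_INIT_PRIORITY in the script
       runs in ascending order before plain .init_array entries. */
    #include <stdio.h>

    __attribute__((constructor(101))) static void runs_first(void)   { puts("ctor 101"); }
    __attribute__((constructor(202))) static void runs_second(void)  { puts("ctor 202"); }
    __attribute__((destructor))       static void runs_at_exit(void) { puts("dtor"); }

    int main(void) { return 0; }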
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xr b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xr new file mode 100644 index 0000000..d48d4ec --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xr
@@ -0,0 +1,166 @@ +/* Script for ld -r: link without relocation */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rel.init 0 : { *(.rel.init) } + .rela.init 0 : { *(.rela.init) } + .rel.text 0 : { *(.rel.text) } + .rela.text 0 : { *(.rela.text) } + .rel.fini 0 : { *(.rel.fini) } + .rela.fini 0 : { *(.rela.fini) } + .rel.rodata 0 : { *(.rel.rodata) } + .rela.rodata 0 : { *(.rela.rodata) } + .rel.data.rel.ro 0 : { *(.rel.data.rel.ro) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rel.data 0 : { *(.rel.data) } + .rela.data 0 : { *(.rela.data) } + .rel.tdata 0 : { *(.rel.tdata) } + .rela.tdata 0 : { *(.rela.tdata) } + .rel.tbss 0 : { *(.rel.tbss) } + .rela.tbss 0 : { *(.rela.tbss) } + .rel.ctors 0 : { *(.rel.ctors) } + .rela.ctors 0 : { *(.rela.ctors) } + .rel.dtors 0 : { *(.rel.dtors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rel.got 0 : { *(.rel.got) } + .rela.got 0 : { *(.rela.got) } + .rel.bss 0 : { *(.rel.bss) } + .rela.bss 0 : { *(.rela.bss) } + .rel.iplt 0 : + { + *(.rel.iplt) + } + .rela.iplt 0 : + { + *(.rela.iplt) + } + .rel.plt 0 : + { + *(.rel.plt) + } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } + .plt 0 : { *(.plt) } + .iplt 0 : { *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .ARM.extab 0 : { *(.ARM.extab) } + .ARM.exidx 0 : { *(.ARM.exidx) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. 
*/ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data 0 : + { + *(.data) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
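As its header comment says, this script backs ld -r: every output section gets vma 0 (per the Solaris note), relocations pass through in their per-section .rel.*/.rela.* form rather than being combined, and none of the start/end layout symbols of the executable scripts are defined, because the output is meant to be linked again. A hypothetical round trip (file names below are illustrative only):

    /* Sketch of partial linking through this script; a.c, b.c and
       merged.o are hypothetical names.
     *
     *     gcc -c a.c b.c
     *     ld -r a.o b.o -o merged.o     # sections at vma 0, relocs kept
     *     gcc merged.o -o prog          # final link relocates normally
     */
    int shared_helper(void)   /* imagine this living in a.c */
    {
        return 42;
    }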
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xs b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xs new file mode 100644 index 0000000..ef80780 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xs
@@ -0,0 +1,234 @@ +/* Script for ld --shared: link shared library */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + *(.rel.iplt) + } + .rela.iplt : + { + *(.rela.iplt) + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . 
= ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. 
+ Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
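These ARM scripts bracket the unwind index with __exidx_start/__exidx_end via PROVIDE_HIDDEN, so the symbols resolve within the module being linked but are not exported from it. Each .ARM.exidx entry is two 32-bit words, which allows an in-module check like the sketch below (32-bit ARM EABI assumed):

    /* Sketch: counting unwind-index entries between the hidden
       boundary symbols the script PROVIDEs around .ARM.exidx. */
    extern const char __exidx_start[], __exidx_end[];

    unsigned long arm_exidx_entries(void)
    {
        /* one entry = 8 bytes: an offset word plus a data word */
        return (unsigned long)(__exidx_end - __exidx_start) / 8u;
    }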
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xsc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xsc new file mode 100644 index 0000000..fcc8641 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xsc
@@ -0,0 +1,234 @@ +/* Script for --shared -z combreloc: shared library, combine & sort relocs */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + *(.rel.iplt) + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.iplt) + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . 
= DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. 
+ Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
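This .xsc script differs from the plain --shared one exactly as its header states: the per-section .rel.init/.rel.text/... rules of armelf_linux_eabi.xs are folded into a single sorted .rel.dyn (and .rela.dyn), which is what -z combreloc asks for; sorting the combined table lets the dynamic linker batch symbol lookups. Independent of relocation handling, the .text rule in all these scripts gives grouped homes to compiler-classified code; a sketch using GCC's hot/cold attributes (the .text.hot/.text.unlikely section names are the compiler's convention, typically emitted under -freorder-functions):

    /* Sketch: GCC places cold functions in .text.unlikely and hot ones
       in .text.hot; the script's .text rule then clusters them so the
       hot path shares pages.  Purely illustrative. */
    __attribute__((hot))  int fast_path(int x) { return x + 1; }

    __attribute__((cold)) int error_path(int x)
    {
        return -x;   /* rarely executed; grouped with other cold code */
    }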
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xsw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xsw new file mode 100644 index 0000000..c89c021 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xsw
@@ -0,0 +1,233 @@ +/* Script for --shared -z combreloc -z now -z relro: shared library, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + *(.rel.iplt) + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.iplt) + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . 
= DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. 
+ Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
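All of these scripts KEEP .preinit_array ahead of .init_array; its entries run before any constructor, and the loader honors DT_PREINIT_ARRAY only in the main executable, never in a shared object. A hedged sketch of planting an entry by hand (glibc and bionic pass argc/argv/envp, though portable code should not rely on that exact signature):

    /* Sketch: a function pointer dropped into .preinit_array runs
       before every .init_array constructor in the final executable. */
    #include <stdio.h>

    static void early_setup(int argc, char **argv, char **envp)
    {
        (void)argc; (void)argv; (void)envp;
        puts("preinit");
    }

    __attribute__((used, section(".preinit_array")))
    static void (*early_entry)(int, char **, char **) = early_setup;

    int main(void) { return 0; }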
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xu b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xu new file mode 100644 index 0000000..52d639e --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xu
@@ -0,0 +1,167 @@ +/* Script for ld -Ur: link w/out relocation, do create constructors */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rel.init 0 : { *(.rel.init) } + .rela.init 0 : { *(.rela.init) } + .rel.text 0 : { *(.rel.text) } + .rela.text 0 : { *(.rela.text) } + .rel.fini 0 : { *(.rel.fini) } + .rela.fini 0 : { *(.rela.fini) } + .rel.rodata 0 : { *(.rel.rodata) } + .rela.rodata 0 : { *(.rela.rodata) } + .rel.data.rel.ro 0 : { *(.rel.data.rel.ro) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rel.data 0 : { *(.rel.data) } + .rela.data 0 : { *(.rela.data) } + .rel.tdata 0 : { *(.rel.tdata) } + .rela.tdata 0 : { *(.rela.tdata) } + .rel.tbss 0 : { *(.rel.tbss) } + .rela.tbss 0 : { *(.rela.tbss) } + .rel.ctors 0 : { *(.rel.ctors) } + .rela.ctors 0 : { *(.rela.ctors) } + .rel.dtors 0 : { *(.rel.dtors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rel.got 0 : { *(.rel.got) } + .rela.got 0 : { *(.rela.got) } + .rel.bss 0 : { *(.rel.bss) } + .rela.bss 0 : { *(.rela.bss) } + .rel.iplt 0 : + { + *(.rel.iplt) + } + .rela.iplt 0 : + { + *(.rela.iplt) + } + .rel.plt 0 : + { + *(.rel.plt) + } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } + .plt 0 : { *(.plt) } + .iplt 0 : { *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .ARM.extab 0 : { *(.ARM.extab) } + .ARM.exidx 0 : { *(.ARM.exidx) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. 
*/ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data 0 : + { + *(.data) + SORT(CONSTRUCTORS) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
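-Ur behaves like -r except that, as the SORT(CONSTRUCTORS) in its .data rule shows, it does build the old-style constructor tables, which matters when the merged object will not pass through another constructor-aware link. Like the -r script it assigns no __bss_start/_end pair; those come from the full-link scripts earlier in this import, where they can be consumed as in this sketch:

    /* Sketch: measuring the zero-initialized region between the
       __bss_start and _end symbols assigned by the executable scripts. */
    #include <stdio.h>

    extern char __bss_start, _end;

    int main(void)
    {
        printf(".bss spans %ld bytes\n", (long)(&_end - &__bss_start));
        return 0;
    }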
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xw new file mode 100644 index 0000000..f7418bd --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelf_linux_eabi.xw
@@ -0,0 +1,244 @@ +/* Script for -z combreloc -z now -z relro: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00010000); . = 0x00010000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . 
= DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
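The armelf_linux_eabi.xw variant above (`-z combreloc -z now -z relro`) folds every dynamic relocation into one address-sorted `.rel.dyn`, brackets the IFUNC relocations with `__rel_iplt_start`/`__rel_iplt_end`, and closes the read-only-after-relocation region with `DATA_SEGMENT_RELRO_END`. In statically linked programs the `__rel_iplt_*` pair is what startup code uses to apply `R_ARM_IRELATIVE` entries. A hedged sketch, assuming REL format with the resolver address stored in the slot (as on 32-bit ARM) and omitting error handling:

    /* Sketch only: apply IRELATIVE relocations between the markers the
       script provides.  Each r_offset names a slot whose current value
       is the ifunc resolver; call it and store the result back. */
    #include <elf.h>
    #include <stdint.h>

    extern const Elf32_Rel __rel_iplt_start[], __rel_iplt_end[];

    static void apply_irelative(void)
    {
        for (const Elf32_Rel *r = __rel_iplt_start; r < __rel_iplt_end; r++) {
            uintptr_t *slot = (uintptr_t *) r->r_offset;
            uintptr_t (*resolver)(void) = (uintptr_t (*)(void)) *slot;
            *slot = resolver();
        }
    }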
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.x b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.x new file mode 100644 index 0000000..e96d185 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.x
@@ -0,0 +1,249 @@ +/* Default linker script, for normal executables */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x8000); . = 0x8000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. 
*/ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. 
+ The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
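armelfb.x is the big-endian default script; apart from `OUTPUT_FORMAT("elf32-bigarm", ...)`, the 0x8000 text base, and the `.stack`/`.ARM.attributes` trailer, it mirrors the little-endian layout and publishes the same traditional symbols (`_etext`, `_edata`, `end`, `__data_start`, `__bss_start`, `__bss_end__`). A freestanding startup sketch using two of them — the symbol names match the script, everything else is illustrative:

    /* Sketch: zero .bss before main() using the script's own symbols.
       A byte loop rather than memset, since libc may not be usable
       this early in a freestanding startup. */
    extern char __bss_start[], __bss_end__[];

    void clear_bss(void)
    {
        for (char *p = __bss_start; p < __bss_end__; p++)
            *p = 0;
    }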
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xbn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xbn new file mode 100644 index 0000000..2ab9f01 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xbn
@@ -0,0 +1,248 @@ +/* Script for -N: mix text and data on same page; don't align data */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x8000); . = 0x8000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. 
*/ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = .; + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . 
; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
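The `-N` script (armelfb.xbn) differs from armelfb.x only at the text/data boundary: where the default script advances the dot with the MAXPAGESIZE congruence expression, this one has the no-op `. = .;`, so writable data begins immediately after text on the same page. A small check program, assuming a hosted build actually linked with `-N`; the 0xfff mask assumes 4 KiB pages:

    /* Sketch: with -N, __data_start lands wherever text ends instead
       of at a page-aligned offset.  Both symbols come from the script. */
    #include <stdio.h>

    extern char _etext[], __data_start[];

    int main(void)
    {
        printf("_etext       = %p\n", (void *) _etext);
        printf("__data_start = %p (offset in page: 0x%lx)\n",
               (void *) __data_start,
               (unsigned long) __data_start & 0xfffUL);  /* 4 KiB pages assumed */
        return 0;
    }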
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xc new file mode 100644 index 0000000..60b9250 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xc
@@ -0,0 +1,247 @@ +/* Script for -z combreloc: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x8000); . = 0x8000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. 
& (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
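armelfb.xc is the big-endian `-z combreloc` script: instead of one output section per input relocation section, everything lands in a single sorted `.rel.dyn`, which the dynamic linker can process with good locality and search quickly. The merged table is located through the usual dynamic tags via the linker-defined `_DYNAMIC` symbol; a sketch of that walk (standard ELF, no error handling):

    /* Sketch: locate the combined .rel.dyn through _DYNAMIC.  DT_REL
       points at the table, DT_RELSZ gives its byte size, DT_RELENT the
       entry size; the division yields the entry count. */
    #include <elf.h>
    #include <stddef.h>

    extern Elf32_Dyn _DYNAMIC[];

    static size_t count_dyn_rels(const Elf32_Rel **out)
    {
        size_t relsz = 0, relent = sizeof(Elf32_Rel);
        *out = NULL;
        for (const Elf32_Dyn *d = _DYNAMIC; d->d_tag != DT_NULL; d++) {
            if (d->d_tag == DT_REL)    *out   = (const Elf32_Rel *) d->d_un.d_ptr;
            if (d->d_tag == DT_RELSZ)  relsz  = d->d_un.d_val;
            if (d->d_tag == DT_RELENT) relent = d->d_un.d_val;
        }
        return relent ? relsz / relent : 0;
    }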
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xn new file mode 100644 index 0000000..a822335 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xn
@@ -0,0 +1,248 @@ +/* Script for -n: mix text and data on same page */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x8000); . = 0x8000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. 
*/ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . 
; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
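Like the other executable scripts here, armelfb.xn sorts `.init_array.*` and the legacy `.ctors.*` together with `SORT_BY_INIT_PRIORITY`, and places the unsorted `.init_array`/`.ctors` input after them. The resulting order is observable from C with GCC constructor priorities (lower numbers run first; 0-100 are reserved for the implementation):

    /* Sketch: three constructors demonstrating the ordering the
       .init_array rules above impose.  Expected output:
       early (101), then later (200), then plain. */
    #include <stdio.h>

    __attribute__((constructor(101))) static void early(void) { puts("early (101)"); }
    __attribute__((constructor(200))) static void later(void) { puts("later (200)"); }
    __attribute__((constructor))      static void plain(void) { puts("plain"); }

    int main(void) { return 0; }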
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xr b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xr new file mode 100644 index 0000000..23c7c50 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xr
@@ -0,0 +1,170 @@ +/* Script for ld -r: link without relocation */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rel.init 0 : { *(.rel.init) } + .rela.init 0 : { *(.rela.init) } + .rel.text 0 : { *(.rel.text) } + .rela.text 0 : { *(.rela.text) } + .rel.fini 0 : { *(.rel.fini) } + .rela.fini 0 : { *(.rela.fini) } + .rel.rodata 0 : { *(.rel.rodata) } + .rela.rodata 0 : { *(.rela.rodata) } + .rel.data.rel.ro 0 : { *(.rel.data.rel.ro) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rel.data 0 : { *(.rel.data) } + .rela.data 0 : { *(.rela.data) } + .rel.tdata 0 : { *(.rel.tdata) } + .rela.tdata 0 : { *(.rela.tdata) } + .rel.tbss 0 : { *(.rel.tbss) } + .rela.tbss 0 : { *(.rela.tbss) } + .rel.ctors 0 : { *(.rel.ctors) } + .rela.ctors 0 : { *(.rela.ctors) } + .rel.dtors 0 : { *(.rel.dtors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rel.got 0 : { *(.rel.got) } + .rela.got 0 : { *(.rela.got) } + .rel.bss 0 : { *(.rel.bss) } + .rela.bss 0 : { *(.rela.bss) } + .rel.iplt 0 : + { + *(.rel.iplt) + } + .rela.iplt 0 : + { + *(.rela.iplt) + } + .rel.plt 0 : + { + *(.rel.plt) + } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } + .plt 0 : { *(.plt) } + .iplt 0 : { *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .ARM.extab 0 : { *(.ARM.extab) } + .ARM.exidx 0 : { *(.ARM.exidx) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. 
*/ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data 0 : + { + *(.data) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0 : + { + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
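The `-r` script above assigns address 0 to every section (the Solaris workaround described in its header) and, unlike `-Ur`, leaves `.data` without `SORT(CONSTRUCTORS)`, deferring constructor processing to the final link. The zero VMAs are easy to confirm; a rough checker for a 32-bit relocatable object, assuming a well-formed input file and keeping error handling minimal:

    /* Sketch: dump sh_addr of each section in an ld -r output; every
       section should print 0 under this script. */
    #include <elf.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f) return 1;
        Elf32_Ehdr eh;
        if (fread(&eh, sizeof eh, 1, f) != 1) return 1;
        Elf32_Shdr *sh = malloc(eh.e_shnum * sizeof *sh);
        fseek(f, (long) eh.e_shoff, SEEK_SET);
        if (fread(sh, sizeof *sh, eh.e_shnum, f) != eh.e_shnum) return 1;
        for (int i = 0; i < eh.e_shnum; i++)
            printf("section %2d: sh_addr=0x%08x\n", i, (unsigned) sh[i].sh_addr);
        fclose(f);
        free(sh);
        return 0;
    }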
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xs b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xs new file mode 100644 index 0000000..8d321af --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xs
@@ -0,0 +1,237 @@ +/* Script for ld --shared: link shared library */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + *(.rel.iplt) + } + .rela.iplt : + { + *(.rela.iplt) + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . 
= ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
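The shared-library script above PROVIDEs hidden __exidx_start/__exidx_end symbols around .ARM.exidx, the EABI unwind index. A minimal C sketch of reading them, assuming the standard 8-byte EHABI index entry layout and compilation into the same module (the symbols are hidden, so they are invisible across module boundaries):

/* Sketch: count ARM EHABI unwind index entries between the hidden
   __exidx_start/__exidx_end symbols this script PROVIDEs. */
#include <stdio.h>

extern const char __exidx_start[];   /* start of .ARM.exidx */
extern const char __exidx_end[];     /* end of .ARM.exidx   */

void report_unwind_entries(void)
{
    /* Each EHABI index entry is two 32-bit words (8 bytes). */
    long n = (long)(__exidx_end - __exidx_start) / 8;
    printf("unwind index entries: %ld\n", n);
}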
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xsc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xsc new file mode 100644 index 0000000..c1286d5 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xsc
@@ -0,0 +1,237 @@ +/* Script for --shared -z combreloc: shared library, combine & sort relocs */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + *(.rel.iplt) + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.iplt) + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. 
& (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
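Relative to the plain --shared script, this -z combreloc variant folds the per-section .rel.* flavors into a single .rel.dyn output section so the dynamic linker can sort and binary-search the relocations. A hedged C sketch of the kind of input that feeds .rel.dyn, assuming -fPIC compilation into a shared object; the identifiers are illustrative, not taken from this diff:

/* Sketch: data that requires a load-time relocation in a shared
   library.  The initializer of `entry' cannot be resolved until load
   time, so the linker emits a dynamic data relocation, which this
   script gathers into .rel.dyn. */
int counter;                 /* lives in .bss */
int *entry = &counter;       /* needs a dynamic fixup at load time */

int *get_entry(void) { return entry; }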
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xsw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xsw new file mode 100644 index 0000000..2fde079 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xsw
@@ -0,0 +1,236 @@ +/* Script for --shared -z combreloc -z now -z relro: shared library, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + *(.rel.iplt) + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.iplt) + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. 
& (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. 
*/ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
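The -z now -z relro variant keeps the same section layout, but the loader resolves everything eagerly and then remaps the relro region read-only. A hedged C sketch of data that lands in that region; placing const-qualified objects that still need relocations into .data.rel.ro is standard GCC behavior, assumed here rather than stated by the diff:

/* Sketch: const data that still needs relocations.  GCC places such
   objects in .data.rel.ro; with -z relro they sit in the region the
   loader remaps read-only after applying fixups, and -z now forces
   those fixups to happen before main(). */
int a, b;

int *const jump_table[] = { &a, &b };   /* .data.rel.ro, not .rodata */

int *pick(int i) { return jump_table[i & 1]; }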
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xu b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xu new file mode 100644 index 0000000..b1516f6 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xu
@@ -0,0 +1,171 @@ +/* Script for ld -Ur: link w/out relocation, do create constructors */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rel.init 0 : { *(.rel.init) } + .rela.init 0 : { *(.rela.init) } + .rel.text 0 : { *(.rel.text) } + .rela.text 0 : { *(.rela.text) } + .rel.fini 0 : { *(.rel.fini) } + .rela.fini 0 : { *(.rela.fini) } + .rel.rodata 0 : { *(.rel.rodata) } + .rela.rodata 0 : { *(.rela.rodata) } + .rel.data.rel.ro 0 : { *(.rel.data.rel.ro) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rel.data 0 : { *(.rel.data) } + .rela.data 0 : { *(.rela.data) } + .rel.tdata 0 : { *(.rel.tdata) } + .rela.tdata 0 : { *(.rela.tdata) } + .rel.tbss 0 : { *(.rel.tbss) } + .rela.tbss 0 : { *(.rela.tbss) } + .rel.ctors 0 : { *(.rel.ctors) } + .rela.ctors 0 : { *(.rela.ctors) } + .rel.dtors 0 : { *(.rel.dtors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rel.got 0 : { *(.rel.got) } + .rela.got 0 : { *(.rela.got) } + .rel.bss 0 : { *(.rel.bss) } + .rela.bss 0 : { *(.rela.bss) } + .rel.iplt 0 : + { + *(.rel.iplt) + } + .rela.iplt 0 : + { + *(.rela.iplt) + } + .rel.plt 0 : + { + *(.rel.plt) + } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } + .plt 0 : { *(.plt) } + .iplt 0 : { *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .ARM.extab 0 : { *(.ARM.extab) } + .ARM.exidx 0 : { *(.ARM.exidx) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. 
*/ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data 0 : + { + *(.data) + SORT(CONSTRUCTORS) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0 : + { + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
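Unlike a plain -r link, this -Ur script "does create constructors", and every output section is pinned at vma 0 for the later final link. A hedged C sketch of a constructor that survives such an intermediate link; note the script above does not list .init_array itself, so the assumption here is that ld -r/-Ur carries those input sections through for the final link's script to collect and sort:

/* Sketch: a constructor carried through an intermediate `ld -Ur'
   link.  The compiler emits it into .init_array (or .ctors on old
   toolchains); the final link gathers and orders those sections. */
#include <stdio.h>

__attribute__((constructor))
static void early_setup(void)
{
    puts("early_setup ran before main");
}

int main(void) { return 0; }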
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xw new file mode 100644 index 0000000..f106863 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb.xw
@@ -0,0 +1,247 @@ +/* Script for -z combreloc -z now -z relro: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x8000); . = 0x8000; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN(CONSTANT (MAXPAGESIZE)) + (. 
& (CONSTANT (MAXPAGESIZE) - 1)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + __data_start = . ; + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .stack 0x80000 : + { + _stack = .; + *(.stack) + } + .ARM.attributes 0 : { KEEP (*(.ARM.attributes)) KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
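The __rel_iplt_start/__rel_iplt_end brackets in this executable script exist so statically linked startup code can locate and apply IRELATIVE relocations itself. A hedged C sketch using the GNU ifunc extension; whether this particular ARM/Android toolchain accepts ifunc is an assumption:

/* Sketch: an indirect function.  The resolver runs once at startup;
   the relocation it leaves behind is an IRELATIVE entry, which this
   script brackets with __rel_iplt_start/__rel_iplt_end. */
static int add_impl(int a, int b) { return a + b; }

static int (*resolve_add(void))(int, int)
{
    return add_impl;          /* could select a NEON variant, etc. */
}

int add(int, int) __attribute__((ifunc("resolve_add")));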
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.x b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.x new file mode 100644 index 0000000..80f3ea8 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.x
@@ -0,0 +1,246 @@ +/* Default linker script, for normal executables */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00010000); . = 0x00010000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. 
*/ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. 
+ The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
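This default executable script PROVIDEs hidden __preinit_array/__init_array/__fini_array boundary symbols. A minimal C sketch of the loop the C runtime runs between two of them, shown for illustration only, since libc performs this exactly once at startup:

/* Sketch of what crt startup code does with the hidden
   __init_array_start/__init_array_end symbols this script PROVIDEs:
   call every registered initializer in order.  Do not call this from
   a normal program; libc already does. */
typedef void (*init_fn)(void);

extern init_fn __init_array_start[];
extern init_fn __init_array_end[];

static void run_init_array(void)
{
    for (init_fn *f = __init_array_start; f != __init_array_end; f++)
        (*f)();
}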
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xbn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xbn new file mode 100644 index 0000000..fdda7f3 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xbn
@@ -0,0 +1,243 @@ +/* Script for -N: mix text and data on same page; don't align data */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00010000); . = 0x00010000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. 
*/ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = .; + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. 
+ The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
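The -N script drops the data-segment page alignment (the bare `. = .;` above) but keeps the SORT_BY_INIT_PRIORITY clauses. A minimal C sketch of what those clauses order, assuming GCC's usual encoding of priorities as numbered .init_array.NNNNN input sections; priorities 0-100 are reserved for the implementation:

/* Sketch: constructor priorities.  GCC emits these into
   .init_array.00101 / .init_array.00202 input sections, and the
   SORT_BY_INIT_PRIORITY(.init_array.*) clause runs lower numbers
   first. */
#include <stdio.h>

__attribute__((constructor(101)))
static void first(void)  { puts("first (priority 101)"); }

__attribute__((constructor(202)))
static void second(void) { puts("second (priority 202)"); }

int main(void) { return 0; }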
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xc new file mode 100644 index 0000000..1bf56a2 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xc
@@ -0,0 +1,244 @@ +/* Script for -z combreloc: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00010000); . = 0x00010000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . 
= DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
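Note on the script above: with -z combreloc, the per-section .rel.*/.rela.* output rules of the default script are collapsed into single .rel.dyn/.rela.dyn sections so the runtime linker can sort and batch-process them, and the __rel_iplt_start/__rel_iplt_end symbols bracket the IRELATIVE (ifunc) relocations that C startup code applies in static links. To see which internal script a given set of flags selects (a sketch; the toolchain prefix is illustrative), ld can simply be asked to echo it:

    # ld dumps the chosen internal linker script before complaining that
    # there are no input files, so no objects are needed for this check.
    aarch64-linux-android-ld -m armelfb_linux_eabi -z combreloc --verbose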
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xd b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xd new file mode 100644 index 0000000..2600399 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xd
@@ -0,0 +1,245 @@ +/* Script for ld -pie: link position independent executable */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. 
*/ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. 
+ The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
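Compared with the fixed-address executable scripts in this import, the -pie variant above differs only in its base: __executable_start and the location counter start at 0 rather than 0x00010000, so the image is link-time based at zero and the kernel can slide it to a randomized load address. In practice this script is reached through the compiler driver (prefix and file names illustrative, not part of this import):

    # -fPIE compiles position-independent code; -pie makes the driver pass
    # the PIE link mode down to ld, which then selects the .xd script family.
    arm-linux-androideabi-gcc -fPIE -pie -o hello hello.c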
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xdc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xdc new file mode 100644 index 0000000..c1b5373 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xdc
@@ -0,0 +1,244 @@ +/* Script for -pie -z combreloc: position independent executable, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . 
= DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
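One quirk worth noting in the combreloc scripts: ARM EABI dynamic objects normally carry REL-format relocations, so the .rela.* rules above are generic boilerplate expected to match nothing on this target; that is presumably also why the .rela.dyn list omits a .rela.data.rel.ro entry even though .rel.dyn has the matching .rel.data.rel.ro rule. This can be confirmed on any produced binary (hello is a placeholder PIE built with these flags):

    # Expect .rel.dyn / .rel.plt (REL-format) sections here, not .rela.*.
    readelf -r hello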
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xdw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xdw new file mode 100644 index 0000000..bf01d44 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xdw
@@ -0,0 +1,244 @@ +/* Script for -pie -z combreloc -z now -z relro: position independent executable, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0); . = 0 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . 
= DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
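The -z now -z relro variant above makes one structural change relative to .xdc: the .got output section (with .got.plt merged in first) is placed before DATA_SEGMENT_RELRO_END, so the entire GOT falls inside the PT_GNU_RELRO segment; with lazy binding disabled by -z now, nothing needs to write the GOT after startup relocation, and the loader can remap it read-only. A quick check on a binary linked this way (hello is a placeholder):

    # GNU_RELRO describes the region remapped read-only after relocation;
    # the GOT's address should fall inside it for a -z now -z relro link.
    readelf -l hello | grep -A1 GNU_RELRO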
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xn b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xn new file mode 100644 index 0000000..cc38f15 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xn
@@ -0,0 +1,245 @@ +/* Script for -n: mix text and data on same page */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00010000); . = 0x00010000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.iplt : + { + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. 
*/ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. 
+ The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
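The .xn script above serves ld -n (NMAGIC output): per its header comment, text and data are allowed to share a page, while the section layout otherwise tracks the ordinary executable script. Invocation sketch (inputs and prefix illustrative):

    # -n requests the NMAGIC layout, which routes ld to this .xn variant.
    aarch64-linux-android-ld -m armelfb_linux_eabi -n -o hello crt1.o hello.o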
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xr b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xr new file mode 100644 index 0000000..c3b0497 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xr
@@ -0,0 +1,166 @@ +/* Script for ld -r: link without relocation */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rel.init 0 : { *(.rel.init) } + .rela.init 0 : { *(.rela.init) } + .rel.text 0 : { *(.rel.text) } + .rela.text 0 : { *(.rela.text) } + .rel.fini 0 : { *(.rel.fini) } + .rela.fini 0 : { *(.rela.fini) } + .rel.rodata 0 : { *(.rel.rodata) } + .rela.rodata 0 : { *(.rela.rodata) } + .rel.data.rel.ro 0 : { *(.rel.data.rel.ro) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rel.data 0 : { *(.rel.data) } + .rela.data 0 : { *(.rela.data) } + .rel.tdata 0 : { *(.rel.tdata) } + .rela.tdata 0 : { *(.rela.tdata) } + .rel.tbss 0 : { *(.rel.tbss) } + .rela.tbss 0 : { *(.rela.tbss) } + .rel.ctors 0 : { *(.rel.ctors) } + .rela.ctors 0 : { *(.rela.ctors) } + .rel.dtors 0 : { *(.rel.dtors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rel.got 0 : { *(.rel.got) } + .rela.got 0 : { *(.rela.got) } + .rel.bss 0 : { *(.rel.bss) } + .rela.bss 0 : { *(.rela.bss) } + .rel.iplt 0 : + { + *(.rel.iplt) + } + .rela.iplt 0 : + { + *(.rela.iplt) + } + .rel.plt 0 : + { + *(.rel.plt) + } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } + .plt 0 : { *(.plt) } + .iplt 0 : { *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .ARM.extab 0 : { *(.ARM.extab) } + .ARM.exidx 0 : { *(.ARM.exidx) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. 
*/ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data 0 : + { + *(.data) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
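The ld -r script above is deliberately minimal: every section is assigned vma 0, relocations stay in their per-section .rel.*/.rela.* form instead of being combined, no start/end symbols are PROVIDEd, and there is no /DISCARD/ rule, because the output must remain a valid relocatable input for a later final link. Usage sketch (object names illustrative):

    # Partial link: merge two relocatable objects into one, keeping all
    # relocations for the final link to resolve.
    aarch64-linux-android-ld -m armelfb_linux_eabi -r -o combined.o a.o b.o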
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xs b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xs new file mode 100644 index 0000000..ec4930d --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xs
@@ -0,0 +1,234 @@ +/* Script for ld --shared: link shared library */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.init : { *(.rel.init) } + .rela.init : { *(.rela.init) } + .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) } + .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) } + .rel.fini : { *(.rel.fini) } + .rela.fini : { *(.rela.fini) } + .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) } + .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) } + .rel.data.rel.ro : { *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) } + .rela.data.rel.ro : { *(.rela.data.rel.ro .rela.data.rel.ro.* .rela.gnu.linkonce.d.rel.ro.*) } + .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) } + .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) } + .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) } + .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) } + .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) } + .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) } + .rel.ctors : { *(.rel.ctors) } + .rela.ctors : { *(.rela.ctors) } + .rel.dtors : { *(.rel.dtors) } + .rela.dtors : { *(.rela.dtors) } + .rel.got : { *(.rel.got) } + .rela.got : { *(.rela.got) } + .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) } + .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) } + .rel.iplt : + { + *(.rel.iplt) + } + .rela.iplt : + { + *(.rela.iplt) + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . 
= ALIGN (CONSTANT (MAXPAGESIZE)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. 
+ Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
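The --shared script above drops the pieces that only make sense for executables: there is no .interp rule and no __executable_start, the base is just SIZEOF_HEADERS, and .preinit_array/.init_array/.fini_array are emitted without the __init_array_start-style PROVIDE_HIDDEN markers, since a DSO's initializers are located through the dynamic table (DT_INIT_ARRAY and friends) rather than linker-defined symbols. Driver-level sketch (prefix and names illustrative):

    # -shared selects the .xs script family; -fPIC keeps the code
    # position-independent, as a shared object requires.
    arm-linux-androideabi-gcc -shared -fPIC -o libhello.so hello.c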
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xsc b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xsc new file mode 100644 index 0000000..f2a0b09 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xsc
@@ -0,0 +1,234 @@ +/* Script for --shared -z combreloc: shared library, combine & sort relocs */ +/* Modified for Android. */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + *(.rel.iplt) + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.iplt) + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . 
= DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + . = DATA_SEGMENT_RELRO_END (0, .); + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. 
+ Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
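Editor's note: the script above PROVIDEs the classic layout symbols (_etext, _edata, _end). A minimal C sketch of reading them, assuming a program linked by a script in this family; nothing here is Android-specific, and the symbol names come straight from the PROVIDE lines above:

#include <stdio.h>

/* Defined by the linker script (PROVIDE (_etext = .) and friends); the
   addresses of these symbols, not their contents, are the useful values. */
extern char _etext[], _edata[], _end[];

int main(void)
{
  printf("end of text:  %p\n", (void *) _etext);
  printf("end of data:  %p\n", (void *) _edata);
  printf("end of image: %p\n", (void *) _end);
  return 0;
}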
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xsw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xsw new file mode 100644 index 0000000..aaab571 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xsw
@@ -0,0 +1,233 @@ +/* Script for --shared -z combreloc -z now -z relro: shared library, combine & sort relocs */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + . = 0 + SIZEOF_HEADERS; + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + *(.rel.iplt) + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + *(.rela.iplt) + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . 
= DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. 
+ Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
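Editor's note: both shared-library scripts KEEP the .init_array/.fini_array input sections and order them with SORT_BY_INIT_PRIORITY. A hedged C example of code that lands in those sections — GCC lowers the constructor/destructor attributes to function pointers in .init_array.NNNNN/.fini_array.NNNNN, so the priority clauses above determine run order:

#include <stdio.h>

/* Priorities 101-65535 are available to users; prioritized entries are
   placed in .init_array.00101 etc. and sorted ahead of the unprioritized
   ones, so init_early runs before init_default. */
__attribute__((constructor(101)))
static void init_early(void) { puts("init_early"); }

__attribute__((constructor))
static void init_default(void) { puts("init_default"); }

__attribute__((destructor))
static void fini(void) { puts("fini"); }

int main(void) { puts("main"); return 0; }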
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xu b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xu new file mode 100644 index 0000000..7de1661 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xu
@@ -0,0 +1,167 @@ +/* Script for ld -Ur: link w/out relocation, do create constructors */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) + /* For some reason, the Solaris linker makes bad executables + if gld -r is used and the intermediate file has sections starting + at non-zero addresses. Could be a Solaris ld bug, could be a GNU ld + bug. But for now assigning the zero vmas works. */ +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + .interp 0 : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash 0 : { *(.hash) } + .gnu.hash 0 : { *(.gnu.hash) } + .dynsym 0 : { *(.dynsym) } + .dynstr 0 : { *(.dynstr) } + .gnu.version 0 : { *(.gnu.version) } + .gnu.version_d 0: { *(.gnu.version_d) } + .gnu.version_r 0: { *(.gnu.version_r) } + .rel.init 0 : { *(.rel.init) } + .rela.init 0 : { *(.rela.init) } + .rel.text 0 : { *(.rel.text) } + .rela.text 0 : { *(.rela.text) } + .rel.fini 0 : { *(.rel.fini) } + .rela.fini 0 : { *(.rela.fini) } + .rel.rodata 0 : { *(.rel.rodata) } + .rela.rodata 0 : { *(.rela.rodata) } + .rel.data.rel.ro 0 : { *(.rel.data.rel.ro) } + .rela.data.rel.ro 0 : { *(.rela.data.rel.ro) } + .rel.data 0 : { *(.rel.data) } + .rela.data 0 : { *(.rela.data) } + .rel.tdata 0 : { *(.rel.tdata) } + .rela.tdata 0 : { *(.rela.tdata) } + .rel.tbss 0 : { *(.rel.tbss) } + .rela.tbss 0 : { *(.rela.tbss) } + .rel.ctors 0 : { *(.rel.ctors) } + .rela.ctors 0 : { *(.rela.ctors) } + .rel.dtors 0 : { *(.rel.dtors) } + .rela.dtors 0 : { *(.rela.dtors) } + .rel.got 0 : { *(.rel.got) } + .rela.got 0 : { *(.rela.got) } + .rel.bss 0 : { *(.rel.bss) } + .rela.bss 0 : { *(.rela.bss) } + .rel.iplt 0 : + { + *(.rel.iplt) + } + .rela.iplt 0 : + { + *(.rela.iplt) + } + .rel.plt 0 : + { + *(.rel.plt) + } + .rela.plt 0 : + { + *(.rela.plt) + } + .init 0 : + { + KEEP (*(SORT_NONE(.init))) + } + .plt 0 : { *(.plt) } + .iplt 0 : { *(.iplt) } + .text 0 : + { + *(.text .stub) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + } + .fini 0 : + { + KEEP (*(SORT_NONE(.fini))) + } + .rodata 0 : { *(.rodata) } + .rodata1 0 : { *(.rodata1) } + .ARM.extab 0 : { *(.ARM.extab) } + .ARM.exidx 0 : { *(.ARM.exidx) } + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame 0 : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges 0 : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + /* Exception handling */ + .eh_frame 0 : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table 0 : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges 0 : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata 0 : { *(.tdata) } + .tbss 0 : { *(.tbss) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. 
*/ + .preinit_array 0 : + { + KEEP (*(.preinit_array)) + } + .jcr 0 : { KEEP (*(.jcr)) } + .dynamic 0 : { *(.dynamic) } + .got 0 : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + .data 0 : + { + *(.data) + SORT(CONSTRUCTORS) + } + .data1 0 : { *(.data1) } + .bss 0 : + { + *(.dynbss) + *(.bss) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + } + /* Stabs debugging sections. */ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } +}
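Editor's note: the comment about __preinit_array_start alignment refers to .preinit_array, which these scripts KEEP. A hedged sketch of manually registering a pre-initializer; this relies on C-library behavior (glibc and bionic run .preinit_array entries before .init_array) and applies to executables only:

#include <stdio.h>

static void preinit(int argc, char **argv, char **envp)
{
  (void) argc; (void) argv; (void) envp;
  puts("preinit_array entry");
}

/* "used" keeps the entry alive through compilation; KEEP (*(.preinit_array))
   in the script keeps it through linking. Dynamic linkers ignore or reject
   .preinit_array in shared objects, so this is for executables. */
__attribute__((used, section(".preinit_array"), aligned(sizeof(void *))))
static void (*preinit_entry)(int, char **, char **) = preinit;

int main(void) { puts("main"); return 0; }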
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xw b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xw new file mode 100644 index 0000000..f9550bf --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib/ldscripts/armelfb_linux_eabi.xw
@@ -0,0 +1,244 @@ +/* Script for -z combreloc -z now -z relro: combine and sort reloc sections */ +/* Copyright (C) 2014 Free Software Foundation, Inc. + Copying and distribution of this script, with or without modification, + are permitted in any medium without royalty provided the copyright + notice and this notice are preserved. */ +OUTPUT_FORMAT("elf32-bigarm", "elf32-bigarm", + "elf32-littlearm") +OUTPUT_ARCH(arm) +ENTRY(_start) +SECTIONS +{ + /* Read-only sections, merged into text segment: */ + PROVIDE (__executable_start = 0x00010000); . = 0x00010000 + SIZEOF_HEADERS; + .interp : { *(.interp) } + .note.gnu.build-id : { *(.note.gnu.build-id) } + .hash : { *(.hash) } + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + .rel.dyn : + { + *(.rel.init) + *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) + *(.rel.fini) + *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) + *(.rel.data.rel.ro .rel.data.rel.ro.* .rel.gnu.linkonce.d.rel.ro.*) + *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) + *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) + *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) + *(.rel.ctors) + *(.rel.dtors) + *(.rel.got) + *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rel_iplt_start = .); + *(.rel.iplt) + PROVIDE_HIDDEN (__rel_iplt_end = .); + } + .rela.dyn : + { + *(.rela.init) + *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) + *(.rela.fini) + *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) + *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) + *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) + *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) + *(.rela.ctors) + *(.rela.dtors) + *(.rela.got) + *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) + PROVIDE_HIDDEN (__rela_iplt_start = .); + *(.rela.iplt) + PROVIDE_HIDDEN (__rela_iplt_end = .); + } + .rel.plt : + { + *(.rel.plt) + } + .rela.plt : + { + *(.rela.plt) + } + .init : + { + KEEP (*(SORT_NONE(.init))) + } + .plt : { *(.plt) } + .iplt : { *(.iplt) } + .text : + { + *(.text.unlikely .text.*_unlikely .text.unlikely.*) + *(.text.exit .text.exit.*) + *(.text.startup .text.startup.*) + *(.text.hot .text.hot.*) + *(.text .stub .text.* .gnu.linkonce.t.*) + /* .gnu.warning sections are handled specially by elf32.em. */ + *(.gnu.warning) + *(.glue_7t) *(.glue_7) *(.vfp11_veneer) *(.v4_bx) + } + .fini : + { + KEEP (*(SORT_NONE(.fini))) + } + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + .rodata : { *(.rodata .rodata.* .gnu.linkonce.r.*) } + .rodata1 : { *(.rodata1) } + .ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) } + PROVIDE_HIDDEN (__exidx_start = .); + .ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) } + PROVIDE_HIDDEN (__exidx_end = .); + .eh_frame_hdr : { *(.eh_frame_hdr) } + .eh_frame : ONLY_IF_RO { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RO { *(.gcc_except_table + .gcc_except_table.*) } + /* These sections are generated by the Sun/Oracle C++ compiler. */ + .exception_ranges : ONLY_IF_RO { *(.exception_ranges + .exception_ranges*) } + /* Adjust the address for the data segment. For 32 bits we want to align + at exactly a page boundary to make life easier for apriori. */ + . = ALIGN (CONSTANT (MAXPAGESIZE)); . 
= DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE)); + /* Exception handling */ + .eh_frame : ONLY_IF_RW { KEEP (*(.eh_frame)) } + .gcc_except_table : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) } + .exception_ranges : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) } + /* Thread Local Storage sections */ + .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) } + .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) } + /* Ensure the __preinit_array_start label is properly aligned. We + could instead move the label definition inside the section, but + the linker would then create the section even if it turns out to + be empty, which isn't pretty. */ + . = ALIGN(32 / 8); + PROVIDE_HIDDEN (__preinit_array_start = .); + .preinit_array : + { + KEEP (*(.preinit_array)) + } + PROVIDE_HIDDEN (__preinit_array_end = .); + PROVIDE_HIDDEN (__init_array_start = .); + .init_array : + { + KEEP (*crtbegin*.o(.init_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*))) + KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .ctors)) + } + PROVIDE_HIDDEN (__init_array_end = .); + PROVIDE_HIDDEN (__fini_array_start = .); + .fini_array : + { + KEEP (*crtbegin*.o(.fini_array)) + KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*))) + KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin*.o *crtend.o *crtend*.o ) .dtors)) + } + PROVIDE_HIDDEN (__fini_array_end = .); + .ctors : + { + /* gcc uses crtbegin.o to find the start of + the constructors, so we make sure it is + first. Because this is a wildcard, it + doesn't matter if the user does not + actually link against crtbegin.o; the + linker won't look for a file to match a + wildcard. The wildcard also means that it + doesn't matter which directory crtbegin.o + is in. */ + KEEP (*crtbegin.o(.ctors)) + KEEP (*crtbegin*.o(.ctors)) + /* We don't want to include the .ctor section from + the crtend.o file until after the sorted ctors. + The .ctor section from the crtend file contains the + end of ctors marker and it must be last */ + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .ctors)) + KEEP (*(SORT(.ctors.*))) + KEEP (*(.ctors)) + } + .dtors : + { + KEEP (*crtbegin.o(.dtors)) + KEEP (*crtbegin*.o(.dtors)) + KEEP (*(EXCLUDE_FILE (*crtend.o *crtend*.o ) .dtors)) + KEEP (*(SORT(.dtors.*))) + KEEP (*(.dtors)) + } + .jcr : { KEEP (*(.jcr)) } + .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) } + .dynamic : { *(.dynamic) } + .got : { *(.got.plt) *(.igot.plt) *(.got) *(.igot) } + . = DATA_SEGMENT_RELRO_END (0, .); + .data : + { + PROVIDE (__data_start = .); + *(.data .data.* .gnu.linkonce.d.*) + SORT(CONSTRUCTORS) + } + .data1 : { *(.data1) } + _edata = .; PROVIDE (edata = .); + __bss_start = .; + __bss_start__ = .; + .bss : + { + *(.dynbss) + *(.bss .bss.* .gnu.linkonce.b.*) + *(COMMON) + /* Align here to ensure that the .bss section occupies space up to + _end. Align after .bss to ensure correct alignment even if the + .bss section disappears because there are no input sections. */ + . = ALIGN(32 / 8); + } + _bss_end__ = . ; __bss_end__ = . ; + . = ALIGN(32 / 8); + . = SEGMENT_START("ldata-segment", .); + . = ALIGN(32 / 8); + __end__ = . ; + _end = .; + _bss_end__ = . ; __bss_end__ = . ; __end__ = . ; + PROVIDE (end = .); + . = DATA_SEGMENT_END (.); + /* Stabs debugging sections. 
*/ + .stab 0 : { *(.stab) } + .stabstr 0 : { *(.stabstr) } + .stab.excl 0 : { *(.stab.excl) } + .stab.exclstr 0 : { *(.stab.exclstr) } + .stab.index 0 : { *(.stab.index) } + .stab.indexstr 0 : { *(.stab.indexstr) } + .comment 0 : { *(.comment) } + /* DWARF debug sections. + Symbols in the DWARF debugging sections are relative to the beginning + of the section so we begin them at 0. */ + /* DWARF 1 */ + .debug 0 : { *(.debug) } + .line 0 : { *(.line) } + /* GNU DWARF 1 extensions */ + .debug_srcinfo 0 : { *(.debug_srcinfo) } + .debug_sfnames 0 : { *(.debug_sfnames) } + /* DWARF 1.1 and DWARF 2 */ + .debug_aranges 0 : { *(.debug_aranges) } + .debug_pubnames 0 : { *(.debug_pubnames) } + /* DWARF 2 */ + .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) } + .debug_abbrev 0 : { *(.debug_abbrev) } + .debug_line 0 : { *(.debug_line .debug_line.* .debug_line_end ) } + .debug_frame 0 : { *(.debug_frame) } + .debug_str 0 : { *(.debug_str) } + .debug_loc 0 : { *(.debug_loc) } + .debug_macinfo 0 : { *(.debug_macinfo) } + /* SGI/MIPS DWARF 2 extensions */ + .debug_weaknames 0 : { *(.debug_weaknames) } + .debug_funcnames 0 : { *(.debug_funcnames) } + .debug_typenames 0 : { *(.debug_typenames) } + .debug_varnames 0 : { *(.debug_varnames) } + /* DWARF 3 */ + .debug_pubtypes 0 : { *(.debug_pubtypes) } + .debug_ranges 0 : { *(.debug_ranges) } + /* DWARF Extension. */ + .debug_macro 0 : { *(.debug_macro) } + .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) } + .note.gnu.arm.ident 0 : { KEEP (*(.note.gnu.arm.ident)) } + /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) *(.mdebug.*) } +}
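Editor's note: unlike the shared-library variants, this executable script also PROVIDE_HIDDENs bracketing symbols such as __init_array_start/__init_array_end. A sketch of how C-runtime startup code typically walks that table (simplified; real crt code does this exactly once, before main):

/* Entries conventionally receive (argc, argv, envp), matching what
   bionic's and glibc's startup code passes. */
typedef void (*init_fn)(int, char **, char **);

/* Bracketing symbols defined by the script; hidden, so they resolve
   within this module. */
extern init_fn __init_array_start[], __init_array_end[];

static void run_init_array(int argc, char **argv, char **envp)
{
  for (init_fn *fn = __init_array_start; fn != __init_array_end; fn++)
    (*fn)(argc, argv, envp);
}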
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib64/libatomic.a b/aarch64-linux-android-4.9/aarch64-linux-android/lib64/libatomic.a new file mode 100644 index 0000000..1dba43c --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib64/libatomic.a Binary files differ
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib64/libgomp.a b/aarch64-linux-android-4.9/aarch64-linux-android/lib64/libgomp.a new file mode 100644 index 0000000..efb0fa7 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib64/libgomp.a Binary files differ
diff --git a/aarch64-linux-android-4.9/aarch64-linux-android/lib64/libgomp.spec b/aarch64-linux-android-4.9/aarch64-linux-android/lib64/libgomp.spec new file mode 100644 index 0000000..2fd7721 --- /dev/null +++ b/aarch64-linux-android-4.9/aarch64-linux-android/lib64/libgomp.spec
@@ -0,0 +1,3 @@ +# This spec file is read by gcc when linking. It is used to specify the +# standard libraries we need in order to link with libgomp. +*link_gomp: -lgomp
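Editor's note: the *link_gomp spec is substituted by the gcc driver when linking with -fopenmp, appending -lgomp. A minimal OpenMP program to exercise it; the build command in the comment is an assumed invocation of this toolchain, not something stated in the diff:

/* Assumed build command (paths and flags are illustrative):
     aarch64-linux-android-gcc -fopenmp hello_omp.c -o hello_omp
   -fopenmp makes the driver expand %(link_gomp), pulling in -lgomp. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
#pragma omp parallel
  printf("hello from thread %d of %d\n",
         omp_get_thread_num(), omp_get_num_threads());
  return 0;
}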
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-addr2line b/aarch64-linux-android-4.9/bin/aarch64-linux-android-addr2line new file mode 100755 index 0000000..b21414f --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-addr2line Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-ar b/aarch64-linux-android-4.9/bin/aarch64-linux-android-ar new file mode 100755 index 0000000..c415ebd --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-ar Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-as b/aarch64-linux-android-4.9/bin/aarch64-linux-android-as new file mode 100755 index 0000000..3d5f60f --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-as Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-c++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-c++ new file mode 120000 index 0000000..348d40b --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-c++
@@ -0,0 +1 @@ +aarch64-linux-android-g++ \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-c++filt b/aarch64-linux-android-4.9/bin/aarch64-linux-android-c++filt new file mode 100755 index 0000000..c1df846 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-c++filt Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-cpp b/aarch64-linux-android-4.9/bin/aarch64-linux-android-cpp new file mode 100755 index 0000000..5a066e5 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-cpp Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-dwp b/aarch64-linux-android-4.9/bin/aarch64-linux-android-dwp new file mode 100755 index 0000000..8a2b8c1 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-dwp Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-elfedit b/aarch64-linux-android-4.9/bin/aarch64-linux-android-elfedit new file mode 100755 index 0000000..054067b --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-elfedit Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-g++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-g++ new file mode 100755 index 0000000..c7045a4 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-g++ Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc new file mode 100755 index 0000000..1da6739 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-4.9 b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-4.9 new file mode 120000 index 0000000..347842f --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-4.9
@@ -0,0 +1 @@ +aarch64-linux-android-gcc \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-4.9.x b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-4.9.x new file mode 100755 index 0000000..1da6739 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-4.9.x Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-ar b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-ar new file mode 100755 index 0000000..3fd989f --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-ar Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-nm b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-nm new file mode 100755 index 0000000..49aa92b --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-nm Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-ranlib b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-ranlib new file mode 100755 index 0000000..e55dc59 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcc-ranlib Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcov b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcov new file mode 100755 index 0000000..281a629 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcov Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcov-tool b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcov-tool new file mode 100755 index 0000000..e2a3ae5 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gcov-tool Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-gprof b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gprof new file mode 100755 index 0000000..28609bc --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-gprof Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-ld b/aarch64-linux-android-4.9/bin/aarch64-linux-android-ld new file mode 120000 index 0000000..d740875 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-ld
@@ -0,0 +1 @@ +aarch64-linux-android-ld.bfd \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-ld.bfd b/aarch64-linux-android-4.9/bin/aarch64-linux-android-ld.bfd new file mode 100755 index 0000000..ea9c0c0 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-ld.bfd Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-ld.gold b/aarch64-linux-android-4.9/bin/aarch64-linux-android-ld.gold new file mode 100755 index 0000000..6e48409 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-ld.gold Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-nm b/aarch64-linux-android-4.9/bin/aarch64-linux-android-nm new file mode 100755 index 0000000..5df914c --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-nm Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-objcopy b/aarch64-linux-android-4.9/bin/aarch64-linux-android-objcopy new file mode 100755 index 0000000..db82485 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-objcopy Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-objdump b/aarch64-linux-android-4.9/bin/aarch64-linux-android-objdump new file mode 100755 index 0000000..b68a76c --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-objdump Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-ranlib b/aarch64-linux-android-4.9/bin/aarch64-linux-android-ranlib new file mode 100755 index 0000000..d12e936 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-ranlib Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-readelf b/aarch64-linux-android-4.9/bin/aarch64-linux-android-readelf new file mode 100755 index 0000000..3c25ddc --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-readelf Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-size b/aarch64-linux-android-4.9/bin/aarch64-linux-android-size new file mode 100755 index 0000000..aad6605 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-size Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-strings b/aarch64-linux-android-4.9/bin/aarch64-linux-android-strings new file mode 100755 index 0000000..8c65f83 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-strings Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-android-strip b/aarch64-linux-android-4.9/bin/aarch64-linux-android-strip new file mode 100755 index 0000000..687b852 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-android-strip Binary files differ
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-ar b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-ar new file mode 120000 index 0000000..b422de2 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-ar
@@ -0,0 +1 @@ +aarch64-linux-android-ar \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-as b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-as new file mode 120000 index 0000000..73b56dd --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-as
@@ -0,0 +1 @@ +aarch64-linux-android-as \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-cpp b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-cpp new file mode 120000 index 0000000..7057f43 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-cpp
@@ -0,0 +1 @@ +aarch64-linux-android-cpp \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-gcc b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-gcc new file mode 120000 index 0000000..347842f --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-gcc
@@ -0,0 +1 @@ +aarch64-linux-android-gcc \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-ld b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-ld new file mode 120000 index 0000000..d740875 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-ld
@@ -0,0 +1 @@ +aarch64-linux-android-ld.bfd \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-nm b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-nm new file mode 120000 index 0000000..a78935d --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-nm
@@ -0,0 +1 @@ +aarch64-linux-android-nm \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-objcopy b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-objcopy new file mode 120000 index 0000000..ced242d --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-objcopy
@@ -0,0 +1 @@ +aarch64-linux-android-objcopy \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-objdump b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-objdump new file mode 120000 index 0000000..940df24 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-objdump
@@ -0,0 +1 @@ +aarch64-linux-android-objdump \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-size b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-size new file mode 120000 index 0000000..3be6243 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-size
@@ -0,0 +1 @@ +aarch64-linux-android-size \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-strip b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-strip new file mode 120000 index 0000000..98f9e52 --- /dev/null +++ b/aarch64-linux-android-4.9/bin/aarch64-linux-androidkernel-strip
@@ -0,0 +1 @@ +aarch64-linux-android-strip \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtbegin.o b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtbegin.o new file mode 100644 index 0000000..f0a9320 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtbegin.o Binary files differ
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtbeginS.o b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtbeginS.o new file mode 100644 index 0000000..e69fad8 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtbeginS.o Binary files differ
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtbeginT.o b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtbeginT.o new file mode 100644 index 0000000..f0a9320 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtbeginT.o Binary files differ
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtend.o b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtend.o new file mode 100644 index 0000000..d4350c9 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtend.o Binary files differ
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtendS.o b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtendS.o new file mode 100644 index 0000000..d4350c9 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/crtendS.o Binary files differ
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/gcov-counter.def b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/gcov-counter.def new file mode 100644 index 0000000..e847f05 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/gcov-counter.def
@@ -0,0 +1,60 @@ +/* Definitions for the gcov counters in the GNU compiler. + Copyright (C) 2001-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify it under +the terms of the GNU General Public License as published by the Free +Software Foundation; either version 3, or (at your option) any later +version. + +GCC is distributed in the hope that it will be useful, but WITHOUT ANY +WARRANTY; without even the implied warranty of MERCHANTABILITY or +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License +for more details. + +You should have received a copy of the GNU General Public License +along with GCC; see the file COPYING3. If not see +<http://www.gnu.org/licenses/>. */ + +/* Before including this file, define a macro: + + DEF_GCOV_COUNTER(COUNTER, NAME, FN_TYPE) + + This macro is expanded once for each supported gcov counter, yielding + the counter, its name, or the type of its handler function. FN_TYPE + expands to a handler-function suffix; in gcov_merge, for example, it + expands to __gcov_merge ## FN_TYPE. */ + +/* Arc transitions. */ +DEF_GCOV_COUNTER(GCOV_COUNTER_ARCS, "arcs", _add) + +/* Histogram of a value inside an interval. */ +DEF_GCOV_COUNTER(GCOV_COUNTER_V_INTERVAL, "interval", _add) + +/* Histogram of the exact power-of-2 logarithm of a value. */ +DEF_GCOV_COUNTER(GCOV_COUNTER_V_POW2, "pow2", _add) + +/* The most common value of an expression. */ +DEF_GCOV_COUNTER(GCOV_COUNTER_V_SINGLE, "single", _single) + +/* The most common difference between consecutive values of an expression. */ +DEF_GCOV_COUNTER(GCOV_COUNTER_V_DELTA, "delta", _delta) + +/* The most common indirect address. */ +DEF_GCOV_COUNTER(GCOV_COUNTER_V_INDIR, "indirect_call", _single) + +/* Average value passed to the counter. */ +DEF_GCOV_COUNTER(GCOV_COUNTER_AVERAGE, "average", _add) + +/* Inclusive OR of all values passed to the counter. */ +DEF_GCOV_COUNTER(GCOV_COUNTER_IOR, "ior", _ior) + +/* Top-N value tracking for indirect calls. */ +DEF_GCOV_COUNTER(GCOV_COUNTER_ICALL_TOPNV, "indirect_call_topn", _icall_topn) + +/* Time profile collecting the first run of each function. */ +DEF_GCOV_COUNTER(GCOV_TIME_PROFILER, "time_profiler", _time_profile) + +/* Value tracking for direct calls. */ +DEF_GCOV_COUNTER(GCOV_COUNTER_DIRECT_CALL, "direct_call", _dc)
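Editor's note: gcov-counter.def is meant to be consumed as an X-macro, exactly as its header comment describes. A hedged sketch of the two usual expansions (this mirrors the pattern gcov-io.h uses to build its counter enum; the include path is assumed):

/* Expansion 1: an enum of counter indices, with a sentinel count. */
#define DEF_GCOV_COUNTER(COUNTER, NAME, FN_TYPE) COUNTER,
enum gcov_counter_index {
#include "gcov-counter.def"
  GCOV_COUNTERS
};
#undef DEF_GCOV_COUNTER

/* Expansion 2: the per-counter merge-section names. */
#define DEF_GCOV_COUNTER(COUNTER, NAME, FN_TYPE) NAME,
static const char *const gcov_counter_names[GCOV_COUNTERS] = {
#include "gcov-counter.def"
};
#undef DEF_GCOV_COUNTER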
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/gcov-io.c b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/gcov-io.c new file mode 100644 index 0000000..fc5e32e --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/gcov-io.c
@@ -0,0 +1,1233 @@ +/* File format for coverage information + Copyright (C) 1996-2014 Free Software Foundation, Inc. + Contributed by Bob Manson <manson@cygnus.com>. + Completely remangled by Nathan Sidwell <nathan@codesourcery.com>. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify it under +the terms of the GNU General Public License as published by the Free +Software Foundation; either version 3, or (at your option) any later +version. + +GCC is distributed in the hope that it will be useful, but WITHOUT ANY +WARRANTY; without even the implied warranty of MERCHANTABILITY or +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License +for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +/* Routines declared in gcov-io.h. This file should be #included by + another source file, after having #included gcov-io.h. */ + +#if !IN_GCOV +static void gcov_write_block (unsigned); +static gcov_unsigned_t *gcov_write_words (unsigned); +#endif +static const gcov_unsigned_t *gcov_read_words (unsigned); +#if !IN_LIBGCOV +static void gcov_allocate (unsigned); +#endif + +/* Optimum number of gcov_unsigned_t's read from or written to disk. */ +#define GCOV_BLOCK_SIZE (1 << 10) + +GCOV_LINKAGE struct gcov_var +{ + _GCOV_FILE *file; + gcov_position_t start; /* Position of first byte of block */ + unsigned offset; /* Read/write position within the block. */ + unsigned length; /* Read limit in the block. */ + unsigned overread; /* Number of words overread. */ + int error; /* < 0 overflow, > 0 disk error. */ + int mode; /* < 0 writing, > 0 reading */ +#if IN_LIBGCOV + /* Holds one block plus 4 bytes, thus all coverage reads & writes + fit within this buffer and we always can transfer GCOV_BLOCK_SIZE + to and from the disk. libgcov never backtracks and only writes 4 + or 8 byte objects. */ + gcov_unsigned_t buffer[GCOV_BLOCK_SIZE + 1]; +#else + int endian; /* Swap endianness. */ + /* Holds a variable length block, as the compiler can write + strings and needs to backtrack. */ + size_t alloc; + gcov_unsigned_t *buffer; +#endif +} gcov_var; + +/* Save the current position in the gcov file. */ +/* We need to expose this function when compiling for gcov-tool. */ +#ifndef IN_GCOV_TOOL +static inline +#endif +gcov_position_t +gcov_position (void) +{ + return gcov_var.start + gcov_var.offset; +} + +/* Return nonzero if the error flag is set. */ +/* We need to expose this function when compiling for gcov-tool. */ +#ifndef IN_GCOV_TOOL +static inline +#endif +int +gcov_is_error (void) +{ + return gcov_var.file ? gcov_var.error : 1; +} + +#if IN_LIBGCOV +/* Move to beginning of file and initialize for writing. 
*/ +GCOV_LINKAGE inline void +gcov_rewrite (void) +{ + gcc_assert (gcov_var.mode > 0); + gcov_var.mode = -1; + gcov_var.start = 0; + gcov_var.offset = 0; + _GCOV_fseek (gcov_var.file, 0L, SEEK_SET); +} +#endif + +static inline gcov_unsigned_t from_file (gcov_unsigned_t value) +{ +#if !IN_LIBGCOV + if (gcov_var.endian) + { + value = (value >> 16) | (value << 16); + value = ((value & 0xff00ff) << 8) | ((value >> 8) & 0xff00ff); + } +#endif + return value; +} + +/* Open a gcov file. NAME is the name of the file to open and MODE + indicates whether a new file should be created, or an existing file + opened. If MODE is >= 0 an existing file will be opened, if + possible, and if MODE is <= 0, a new file will be created. Use + MODE=0 to attempt to reopen an existing file and then fall back on + creating a new one. If MODE < 0, the file will be opened in + read-only mode. Otherwise it will be opened for modification. + Return zero on failure, >0 on opening an existing file and <0 on + creating a new one. */ + +#ifndef __KERNEL__ +GCOV_LINKAGE int +#if IN_LIBGCOV +gcov_open (const char *name) +#else +gcov_open (const char *name, int mode) +#endif +{ +#if IN_LIBGCOV + const int mode = 0; +#endif +#if GCOV_LOCKED + struct flock s_flock; + int fd; + + s_flock.l_whence = SEEK_SET; + s_flock.l_start = 0; + s_flock.l_len = 0; /* Until EOF. */ + s_flock.l_pid = getpid (); +#endif + + gcc_assert (!gcov_var.file); + gcov_var.start = 0; + gcov_var.offset = gcov_var.length = 0; + gcov_var.overread = -1u; + gcov_var.error = 0; +#if !IN_LIBGCOV + gcov_var.endian = 0; +#endif +#if GCOV_LOCKED + if (mode > 0) + { + /* Read-only mode - acquire a read-lock. */ + s_flock.l_type = F_RDLCK; + /* pass mode (ignored) for compatibility */ + fd = open (name, O_RDONLY, S_IRUSR | S_IWUSR); + } + else if (mode < 0) + { + /* Write mode - acquire a write-lock. */ + s_flock.l_type = F_WRLCK; + fd = open (name, O_RDWR | O_CREAT | O_TRUNC, 0666); + } + else /* mode == 0 */ + { + /* Read-Write mode - acquire a write-lock. */ + s_flock.l_type = F_WRLCK; + fd = open (name, O_RDWR | O_CREAT, 0666); + } + if (fd < 0) + return 0; + + while (fcntl (fd, F_SETLKW, &s_flock) && errno == EINTR) + continue; + + gcov_var.file = fdopen (fd, (mode > 0) ? "rb" : "r+b"); + + if (!gcov_var.file) + { + close (fd); + return 0; + } + + if (mode > 0) + gcov_var.mode = 1; + else if (mode == 0) + { + struct stat st; + + if (fstat (fd, &st) < 0) + { + _GCOV_fclose (gcov_var.file); + gcov_var.file = 0; + return 0; + } + if (st.st_size != 0) + gcov_var.mode = 1; + else + gcov_var.mode = mode * 2 + 1; + } + else + gcov_var.mode = mode * 2 + 1; +#else + if (mode >= 0) + gcov_var.file = _GCOV_fopen (name, (mode > 0) ? "rb" : "r+b"); + + if (gcov_var.file) + gcov_var.mode = 1; + else if (mode <= 0) + { + gcov_var.file = _GCOV_fopen (name, "w+b"); + if (gcov_var.file) + gcov_var.mode = mode * 2 + 1; + } + if (!gcov_var.file) + return 0; +#endif + + setbuf (gcov_var.file, (char *)0); + + return 1; +} +#else /* __KERNEL__ */ + +extern _GCOV_FILE *gcov_current_file; + +GCOV_LINKAGE int +gcov_open (const char *name) +{ + gcov_var.start = 0; + gcov_var.offset = gcov_var.length = 0; + gcov_var.overread = -1u; + gcov_var.error = 0; + gcov_var.file = gcov_current_file; + gcov_var.mode = 1; + + return 1; +} +#endif /* __KERNEL__ */ + + +/* Close the current gcov file. Flushes data to disk. Returns nonzero + on failure or error flag set. 
*/ + +GCOV_LINKAGE int +gcov_close (void) +{ + if (gcov_var.file) + { +#if !IN_GCOV + if (gcov_var.offset && gcov_var.mode < 0) + gcov_write_block (gcov_var.offset); +#endif + _GCOV_fclose (gcov_var.file); + gcov_var.file = 0; + gcov_var.length = 0; + } +#if !IN_LIBGCOV + free (gcov_var.buffer); + gcov_var.alloc = 0; + gcov_var.buffer = 0; +#endif + gcov_var.mode = 0; + return gcov_var.error; +} + +#if !IN_LIBGCOV +/* Check if MAGIC is EXPECTED. Use it to determine endianness of the + file. Returns +1 for same endian, -1 for other endian and zero for + not EXPECTED. */ + +GCOV_LINKAGE int +gcov_magic (gcov_unsigned_t magic, gcov_unsigned_t expected) +{ + if (magic == expected) + return 1; + magic = (magic >> 16) | (magic << 16); + magic = ((magic & 0xff00ff) << 8) | ((magic >> 8) & 0xff00ff); + if (magic == expected) + { + gcov_var.endian = 1; + return -1; + } + return 0; +} +#endif + +#if !IN_LIBGCOV +static void +gcov_allocate (unsigned length) +{ + size_t new_size = gcov_var.alloc; + + if (!new_size) + new_size = GCOV_BLOCK_SIZE; + new_size += length; + new_size *= 2; + + gcov_var.alloc = new_size; + gcov_var.buffer = XRESIZEVAR (gcov_unsigned_t, gcov_var.buffer, new_size << 2); +} +#endif + +#if !IN_GCOV +/* Write out the current block, if needs be. */ + +static void +gcov_write_block (unsigned size) +{ + if (_GCOV_fwrite (gcov_var.buffer, size << 2, 1, gcov_var.file) != 1) + gcov_var.error = 1; + gcov_var.start += size; + gcov_var.offset -= size; +} + +/* Allocate space to write BYTES bytes to the gcov file. Return a + pointer to those bytes, or NULL on failure. */ + +static gcov_unsigned_t * +gcov_write_words (unsigned words) +{ + gcov_unsigned_t *result; + + gcc_assert (gcov_var.mode < 0); +#if IN_LIBGCOV + if (gcov_var.offset >= GCOV_BLOCK_SIZE) + { + gcov_write_block (GCOV_BLOCK_SIZE); + if (gcov_var.offset) + { + gcc_assert (gcov_var.offset == 1); + memcpy (gcov_var.buffer, gcov_var.buffer + GCOV_BLOCK_SIZE, 4); + } + } +#else + if (gcov_var.offset + words > gcov_var.alloc) + gcov_allocate (gcov_var.offset + words); +#endif + result = &gcov_var.buffer[gcov_var.offset]; + gcov_var.offset += words; + + return result; +} + +/* Write unsigned VALUE to coverage file. Sets error flag + appropriately. */ + +GCOV_LINKAGE void +gcov_write_unsigned (gcov_unsigned_t value) +{ + gcov_unsigned_t *buffer = gcov_write_words (1); + + buffer[0] = value; +} + +/* Compute the total length in words required to write NUM_STRINGS + in STRING_ARRAY as unsigned. */ + +GCOV_LINKAGE gcov_unsigned_t +gcov_compute_string_array_len (char **string_array, + gcov_unsigned_t num_strings) +{ + gcov_unsigned_t len = 0, i; + for (i = 0; i < num_strings; i++) + { + gcov_unsigned_t string_len + = (strlen (string_array[i]) + sizeof (gcov_unsigned_t)) + / sizeof (gcov_unsigned_t); + len += string_len; + len += 1; /* Each string is lead by a length. */ + } + return len; +} + +/* Write NUM_STRINGS in STRING_ARRAY as unsigned. 
*/ + +GCOV_LINKAGE void +gcov_write_string_array (char **string_array, gcov_unsigned_t num_strings) +{ + gcov_unsigned_t i, j; + for (j = 0; j < num_strings; j++) + { + gcov_unsigned_t *aligned_string; + gcov_unsigned_t string_len = + (strlen (string_array[j]) + sizeof (gcov_unsigned_t)) / + sizeof (gcov_unsigned_t); + aligned_string = (gcov_unsigned_t *) + alloca ((string_len + 1) * sizeof (gcov_unsigned_t)); + memset (aligned_string, 0, (string_len + 1) * sizeof (gcov_unsigned_t)); + aligned_string[0] = string_len; + strcpy ((char*) (aligned_string + 1), string_array[j]); + for (i = 0; i < (string_len + 1); i++) + gcov_write_unsigned (aligned_string[i]); + } +} + +/* Write counter VALUE to coverage file. Sets error flag + appropriately. */ + +#if IN_LIBGCOV +GCOV_LINKAGE void +gcov_write_counter (gcov_type value) +{ + gcov_unsigned_t *buffer = gcov_write_words (2); + + buffer[0] = (gcov_unsigned_t) value; + if (sizeof (value) > sizeof (gcov_unsigned_t)) + buffer[1] = (gcov_unsigned_t) (value >> 32); + else + buffer[1] = 0; +} +#endif /* IN_LIBGCOV */ + +#if !IN_LIBGCOV +/* Write STRING to coverage file. Sets error flag on file + error, overflow flag on overflow */ + +GCOV_LINKAGE void +gcov_write_string (const char *string) +{ + unsigned length = 0; + unsigned alloc = 0; + gcov_unsigned_t *buffer; + + if (string) + { + length = strlen (string); + alloc = (length + 4) >> 2; + } + + buffer = gcov_write_words (1 + alloc); + + buffer[0] = alloc; + buffer[alloc] = 0; + memcpy (&buffer[1], string, length); +} +#endif + +#if !IN_LIBGCOV +/* Write a tag TAG and reserve space for the record length. Return a + value to be used for gcov_write_length. */ + +GCOV_LINKAGE gcov_position_t +gcov_write_tag (gcov_unsigned_t tag) +{ + gcov_position_t result = gcov_var.start + gcov_var.offset; + gcov_unsigned_t *buffer = gcov_write_words (2); + + buffer[0] = tag; + buffer[1] = 0; + + return result; +} + +/* Write a record length using POSITION, which was returned by + gcov_write_tag. The current file position is the end of the + record, and is restored before returning. Returns nonzero on + overflow. */ + +GCOV_LINKAGE void +gcov_write_length (gcov_position_t position) +{ + unsigned offset; + gcov_unsigned_t length; + gcov_unsigned_t *buffer; + + gcc_assert (gcov_var.mode < 0); + gcc_assert (position + 2 <= gcov_var.start + gcov_var.offset); + gcc_assert (position >= gcov_var.start); + offset = position - gcov_var.start; + length = gcov_var.offset - offset - 2; + buffer = (gcov_unsigned_t *) &gcov_var.buffer[offset]; + buffer[1] = length; + if (gcov_var.offset >= GCOV_BLOCK_SIZE) + gcov_write_block (gcov_var.offset); +} + +#else /* IN_LIBGCOV */ + +/* Write a tag TAG and length LENGTH. */ + +GCOV_LINKAGE void +gcov_write_tag_length (gcov_unsigned_t tag, gcov_unsigned_t length) +{ + gcov_unsigned_t *buffer = gcov_write_words (2); + + buffer[0] = tag; + buffer[1] = length; +} + +/* Write a summary structure to the gcov file. Return nonzero on + overflow. */ + +GCOV_LINKAGE void +gcov_write_summary (gcov_unsigned_t tag, const struct gcov_summary *summary) +{ + unsigned ix, h_ix, bv_ix, h_cnt = 0; + const struct gcov_ctr_summary *csum; + unsigned histo_bitvector[GCOV_HISTOGRAM_BITVECTOR_SIZE]; + + /* Count number of non-zero histogram entries, and fill in a bit vector + of non-zero indices. The histogram is only currently computed for arc + counters. 
*/ + for (bv_ix = 0; bv_ix < GCOV_HISTOGRAM_BITVECTOR_SIZE; bv_ix++) + histo_bitvector[bv_ix] = 0; + csum = &summary->ctrs[GCOV_COUNTER_ARCS]; + for (h_ix = 0; h_ix < GCOV_HISTOGRAM_SIZE; h_ix++) + { + if (csum->histogram[h_ix].num_counters > 0) + { + histo_bitvector[h_ix / 32] |= 1 << (h_ix % 32); + h_cnt++; + } + } + gcov_write_tag_length (tag, GCOV_TAG_SUMMARY_LENGTH (h_cnt)); + gcov_write_unsigned (summary->checksum); + for (csum = summary->ctrs, ix = GCOV_COUNTERS_SUMMABLE; ix--; csum++) + { + gcov_write_unsigned (csum->num); + gcov_write_unsigned (csum->runs); + gcov_write_counter (csum->sum_all); + gcov_write_counter (csum->run_max); + gcov_write_counter (csum->sum_max); + if (ix != GCOV_COUNTER_ARCS) + { + for (bv_ix = 0; bv_ix < GCOV_HISTOGRAM_BITVECTOR_SIZE; bv_ix++) + gcov_write_unsigned (0); + continue; + } + for (bv_ix = 0; bv_ix < GCOV_HISTOGRAM_BITVECTOR_SIZE; bv_ix++) + gcov_write_unsigned (histo_bitvector[bv_ix]); + for (h_ix = 0; h_ix < GCOV_HISTOGRAM_SIZE; h_ix++) + { + if (!csum->histogram[h_ix].num_counters) + continue; + gcov_write_unsigned (csum->histogram[h_ix].num_counters); + gcov_write_counter (csum->histogram[h_ix].min_value); + gcov_write_counter (csum->histogram[h_ix].cum_value); + } + } +} +#endif /* IN_LIBGCOV */ + +#endif /*!IN_GCOV */ + +/* Return a pointer to read BYTES bytes from the gcov file. Returns + NULL on failure (read past EOF). */ + +static const gcov_unsigned_t * +gcov_read_words (unsigned words) +{ + const gcov_unsigned_t *result; + unsigned excess = gcov_var.length - gcov_var.offset; + + gcc_assert (gcov_var.mode > 0); + if (excess < words) + { + gcov_var.start += gcov_var.offset; +#if IN_LIBGCOV + if (excess) + { + gcc_assert (excess == 1); + memcpy (gcov_var.buffer, gcov_var.buffer + gcov_var.offset, 4); + } +#else + memmove (gcov_var.buffer, gcov_var.buffer + gcov_var.offset, excess * 4); +#endif + gcov_var.offset = 0; + gcov_var.length = excess; +#if IN_LIBGCOV + gcc_assert (!gcov_var.length || gcov_var.length == 1); + excess = GCOV_BLOCK_SIZE; +#else + if (gcov_var.length + words > gcov_var.alloc) + gcov_allocate (gcov_var.length + words); + excess = gcov_var.alloc - gcov_var.length; +#endif + excess = _GCOV_fread (gcov_var.buffer + gcov_var.length, + 1, excess << 2, gcov_var.file) >> 2; + gcov_var.length += excess; + if (gcov_var.length < words) + { + gcov_var.overread += words - gcov_var.length; + gcov_var.length = 0; + return 0; + } + } + result = &gcov_var.buffer[gcov_var.offset]; + gcov_var.offset += words; + return result; +} + +/* Read unsigned value from a coverage file. Sets error flag on file + error, overflow flag on overflow */ + +GCOV_LINKAGE gcov_unsigned_t +gcov_read_unsigned (void) +{ + gcov_unsigned_t value; + const gcov_unsigned_t *buffer = gcov_read_words (1); + + if (!buffer) + return 0; + value = from_file (buffer[0]); + return value; +} + +/* Read counter value from a coverage file. Sets error flag on file + error, overflow flag on overflow */ + +GCOV_LINKAGE gcov_type +gcov_read_counter (void) +{ + gcov_type value; + const gcov_unsigned_t *buffer = gcov_read_words (2); + + if (!buffer) + return 0; + value = from_file (buffer[0]); + if (sizeof (value) > sizeof (gcov_unsigned_t)) + value |= ((gcov_type) from_file (buffer[1])) << 32; + else if (buffer[1]) + gcov_var.error = -1; + + return value; +} + +/* We need to expose the below function when compiling for gcov-tool. */ + +#if !IN_LIBGCOV || defined (IN_GCOV_TOOL) +/* Read string from coverage file. Returns a pointer to a static + buffer, or NULL on empty string. 
You must copy the string before + calling another gcov function. */ + +GCOV_LINKAGE const char * +gcov_read_string (void) +{ + unsigned length = gcov_read_unsigned (); + + if (!length) + return 0; + + return (const char *) gcov_read_words (length); +} +#endif + +#ifdef __KERNEL__ +static int +k_popcountll (long long x) +{ + int c = 0; + while (x) + { + c++; + x &= (x-1); + } + return c; +} +#endif + +GCOV_LINKAGE void +gcov_read_summary (struct gcov_summary *summary) +{ + unsigned ix, h_ix, bv_ix, h_cnt = 0; + struct gcov_ctr_summary *csum; + unsigned histo_bitvector[GCOV_HISTOGRAM_BITVECTOR_SIZE]; + unsigned cur_bitvector; + + summary->checksum = gcov_read_unsigned (); + for (csum = summary->ctrs, ix = GCOV_COUNTERS_SUMMABLE; ix--; csum++) + { + csum->num = gcov_read_unsigned (); + csum->runs = gcov_read_unsigned (); + csum->sum_all = gcov_read_counter (); + csum->run_max = gcov_read_counter (); + csum->sum_max = gcov_read_counter (); + memset (csum->histogram, 0, + sizeof (gcov_bucket_type) * GCOV_HISTOGRAM_SIZE); + for (bv_ix = 0; bv_ix < GCOV_HISTOGRAM_BITVECTOR_SIZE; bv_ix++) + { + histo_bitvector[bv_ix] = gcov_read_unsigned (); +#if IN_LIBGCOV + /* When building libgcov we don't include system.h, which includes + hwint.h (where popcount_hwi is declared). However, libgcov.a + is built by the bootstrapped compiler and therefore the builtins + are always available. */ +#ifndef __KERNEL__ + h_cnt += __builtin_popcount (histo_bitvector[bv_ix]); +#else + h_cnt += k_popcountll (histo_bitvector[bv_ix]); +#endif +#else + h_cnt += popcount_hwi (histo_bitvector[bv_ix]); +#endif + } + bv_ix = 0; + h_ix = 0; + cur_bitvector = 0; + while (h_cnt--) + { + /* Find the index corresponding to the next entry we will read in. + First find the next non-zero bitvector and re-initialize + the histogram index accordingly, then right shift and increment + the index until we find a set bit. */ + while (!cur_bitvector) + { + h_ix = bv_ix * 32; + gcc_assert (bv_ix < GCOV_HISTOGRAM_BITVECTOR_SIZE); + cur_bitvector = histo_bitvector[bv_ix++]; + } + while (!(cur_bitvector & 0x1)) + { + h_ix++; + cur_bitvector >>= 1; + } + gcc_assert (h_ix < GCOV_HISTOGRAM_SIZE); + + csum->histogram[h_ix].num_counters = gcov_read_unsigned (); + csum->histogram[h_ix].min_value = gcov_read_counter (); + csum->histogram[h_ix].cum_value = gcov_read_counter (); + /* Shift off the index we are done with and increment to the + corresponding next histogram entry. */ + cur_bitvector >>= 1; + h_ix++; + } + } +} + +/* Read LENGTH words (unsigned type) from a zero profile fixup record with the + number of function flags saved in NUM_FNS. Returns the int flag array, which + should be deallocated by caller, or NULL on error. */ + +GCOV_LINKAGE int * +gcov_read_comdat_zero_fixup (gcov_unsigned_t length, + gcov_unsigned_t *num_fns) +{ +#ifndef __KERNEL__ + unsigned ix, f_ix; + gcov_unsigned_t num = gcov_read_unsigned (); + /* The length consists of 1 word to hold the number of functions, + plus enough 32-bit words to hold 1 bit/function. 
*/ + gcc_assert ((num + 31) / 32 + 1 == length); + int *zero_fixup_flags = (int *) xcalloc (num, sizeof (int)); + for (ix = 0; ix < length - 1; ix++) + { + gcov_unsigned_t bitvector = gcov_read_unsigned (); + f_ix = ix * 32; + while (bitvector) + { + if (bitvector & 0x1) + zero_fixup_flags[f_ix] = 1; + f_ix++; + bitvector >>= 1; + } + } + *num_fns = num; + return zero_fixup_flags; +#else + return NULL; +#endif +} + +/* Read NUM_STRINGS strings (as an unsigned array) in STRING_ARRAY, and return + the number of words read. */ + +GCOV_LINKAGE gcov_unsigned_t +gcov_read_string_array (char **string_array, gcov_unsigned_t num_strings) +{ + gcov_unsigned_t i, j, len = 0; + + for (j = 0; j < num_strings; j++) + { + gcov_unsigned_t string_len = gcov_read_unsigned (); + string_array[j] = + (char *) xmalloc (string_len * sizeof (gcov_unsigned_t)); + for (i = 0; i < string_len; i++) + ((gcov_unsigned_t *) string_array[j])[i] = gcov_read_unsigned (); + len += (string_len + 1); + } + return len; +} + +/* Read LENGTH words (unsigned type) from a build info record with the number + of strings read saved in NUM_STRINGS. Returns the string array, which + should be deallocated by caller, or NULL on error. */ + +GCOV_LINKAGE char ** +gcov_read_build_info (gcov_unsigned_t length, gcov_unsigned_t *num_strings) +{ + gcov_unsigned_t num = gcov_read_unsigned (); + char **build_info_strings = (char **) + xmalloc (sizeof (char *) * num); + gcov_unsigned_t len = gcov_read_string_array (build_info_strings, + num); + if (len != length - 1) + return NULL; + *num_strings = num; + return build_info_strings; +} + +#if (!IN_LIBGCOV && IN_GCOV != 1) || defined (IN_GCOV_TOOL) +/* Read LEN words (unsigned type) and construct MOD_INFO. */ + +GCOV_LINKAGE void +gcov_read_module_info (struct gcov_module_info *mod_info, + gcov_unsigned_t len) +{ + gcov_unsigned_t src_filename_len, filename_len, i, num_strings; + mod_info->ident = gcov_read_unsigned (); + mod_info->is_primary = gcov_read_unsigned (); + mod_info->flags = gcov_read_unsigned (); + mod_info->lang = gcov_read_unsigned (); + mod_info->ggc_memory = gcov_read_unsigned (); + mod_info->num_quote_paths = gcov_read_unsigned (); + mod_info->num_bracket_paths = gcov_read_unsigned (); + mod_info->num_system_paths = gcov_read_unsigned (); + mod_info->num_cpp_defines = gcov_read_unsigned (); + mod_info->num_cpp_includes = gcov_read_unsigned (); + mod_info->num_cl_args = gcov_read_unsigned (); + len -= 11; + + filename_len = gcov_read_unsigned (); + mod_info->da_filename = (char *) xmalloc (filename_len * + sizeof (gcov_unsigned_t)); + for (i = 0; i < filename_len; i++) + ((gcov_unsigned_t *) mod_info->da_filename)[i] = gcov_read_unsigned (); + len -= (filename_len + 1); + + src_filename_len = gcov_read_unsigned (); + mod_info->source_filename = (char *) xmalloc (src_filename_len * + sizeof (gcov_unsigned_t)); + for (i = 0; i < src_filename_len; i++) + ((gcov_unsigned_t *) mod_info->source_filename)[i] = gcov_read_unsigned (); + len -= (src_filename_len + 1); + + num_strings = mod_info->num_quote_paths + mod_info->num_bracket_paths + + mod_info->num_system_paths + + mod_info->num_cpp_defines + mod_info->num_cpp_includes + + mod_info->num_cl_args; + len -= gcov_read_string_array (mod_info->string_array, num_strings); + gcc_assert (!len); +} +#endif + +/* We need to expose the below function when compiling for gcov-tool. */ + +#if !IN_LIBGCOV || defined (IN_GCOV_TOOL) +/* Reset to a known position. BASE should have been obtained from + gcov_position, LENGTH should be a record length. 
*/ + +GCOV_LINKAGE void +gcov_sync (gcov_position_t base, gcov_unsigned_t length) +{ + gcc_assert (gcov_var.mode > 0); + base += length; + if (base - gcov_var.start <= gcov_var.length) + gcov_var.offset = base - gcov_var.start; + else + { + gcov_var.offset = gcov_var.length = 0; + _GCOV_fseek (gcov_var.file, base << 2, SEEK_SET); + gcov_var.start = _GCOV_ftell (gcov_var.file) >> 2; + } +} +#endif + +#if IN_LIBGCOV +/* Move to a given position in a gcov file. */ + +GCOV_LINKAGE void +gcov_seek (gcov_position_t base) +{ + gcc_assert (gcov_var.mode < 0); + if (gcov_var.offset) + gcov_write_block (gcov_var.offset); + _GCOV_fseek (gcov_var.file, base << 2, SEEK_SET); + gcov_var.start = _GCOV_ftell (gcov_var.file) >> 2; +} + +/* Truncate the gcov file at the current position. */ + +GCOV_LINKAGE void +gcov_truncate (void) +{ +#ifdef __KERNEL__ + gcc_assert (0); +#else + long offs; + int filenum; + gcc_assert (gcov_var.mode < 0); + if (gcov_var.offset) + gcov_write_block (gcov_var.offset); + offs = _GCOV_ftell (gcov_var.file); + filenum = fileno (gcov_var.file); + if (offs == -1 || filenum == -1 || _GCOV_ftruncate (filenum, offs)) + gcov_var.error = 1; +#endif /* __KERNEL__ */ +} +#endif + +#if IN_GCOV > 0 +/* Return the modification time of the current gcov file. */ + +GCOV_LINKAGE time_t +gcov_time (void) +{ + struct stat status; + + if (fstat (fileno (gcov_var.file), &status)) + return 0; + else + return status.st_mtime; +} +#endif /* IN_GCOV */ + +#if !IN_GCOV +/* Determine the index into histogram for VALUE. */ + +#if IN_LIBGCOV +static unsigned +#else +GCOV_LINKAGE unsigned +#endif +gcov_histo_index (gcov_type value) +{ + gcov_type_unsigned v = (gcov_type_unsigned)value; + unsigned r = 0; + unsigned prev2bits = 0; + + /* Find index into log2 scale histogram, where each of the log2 + sized buckets is divided into 4 linear sub-buckets for better + focus in the higher buckets. */ + + /* Find the place of the most-significant bit set. */ + if (v > 0) + { +#if IN_LIBGCOV + /* When building libgcov we don't include system.h, which includes + hwint.h (where floor_log2 is declared). However, libgcov.a + is built by the bootstrapped compiler and therefore the builtins + are always available. */ + r = sizeof (long long) * __CHAR_BIT__ - 1 - __builtin_clzll (v); +#else + /* We use floor_log2 from hwint.c, which takes a HOST_WIDE_INT + that is either 32 or 64 bits, and gcov_type_unsigned may be 64 bits. + Need to check for the case where gcov_type_unsigned is 64 bits + and HOST_WIDE_INT is 32 bits and handle it specially. */ +#if HOST_BITS_PER_WIDEST_INT == HOST_BITS_PER_WIDE_INT + r = floor_log2 (v); +#elif HOST_BITS_PER_WIDEST_INT == 2 * HOST_BITS_PER_WIDE_INT + HOST_WIDE_INT hwi_v = v >> HOST_BITS_PER_WIDE_INT; + if (hwi_v) + r = floor_log2 (hwi_v) + HOST_BITS_PER_WIDE_INT; + else + r = floor_log2 ((HOST_WIDE_INT)v); +#else + gcc_unreachable (); +#endif +#endif + } + + /* If at most the 2 least significant bits are set (value is + 0 - 3) then that value is our index into the lowest set of + four buckets. */ + if (r < 2) + return (unsigned)value; + + gcc_assert (r < 64); + + /* Find the two next most significant bits to determine which + of the four linear sub-buckets to select. */ + prev2bits = (v >> (r - 2)) & 0x3; + /* Finally, compose the final bucket index from the log2 index and + the next 2 bits. The minimum r value at this point is 2 since we + returned above if r was 2 or more, so the minimum bucket at this + point is 4. 
*/ + return (r - 1) * 4 + prev2bits; +} + +/* Merge SRC_HISTO into TGT_HISTO. The counters are assumed to be in + the same relative order in both histograms, and are matched up + and merged in reverse order. Each counter is assigned an equal portion of + its entry's original cumulative counter value when computing the + new merged cum_value. */ + +static void gcov_histogram_merge (gcov_bucket_type *tgt_histo, + gcov_bucket_type *src_histo) +{ + int src_i, tgt_i, tmp_i = 0; + unsigned src_num, tgt_num, merge_num; + gcov_type src_cum, tgt_cum, merge_src_cum, merge_tgt_cum, merge_cum; + gcov_type merge_min; + gcov_bucket_type tmp_histo[GCOV_HISTOGRAM_SIZE]; + int src_done = 0; + + memset (tmp_histo, 0, sizeof (gcov_bucket_type) * GCOV_HISTOGRAM_SIZE); + + /* Assume that the counters are in the same relative order in both + histograms. Walk the histograms from largest to smallest entry, + matching up and combining counters in order. */ + src_num = 0; + src_cum = 0; + src_i = GCOV_HISTOGRAM_SIZE - 1; + for (tgt_i = GCOV_HISTOGRAM_SIZE - 1; tgt_i >= 0 && !src_done; tgt_i--) + { + tgt_num = tgt_histo[tgt_i].num_counters; + tgt_cum = tgt_histo[tgt_i].cum_value; + /* Keep going until all of the target histogram's counters at this + position have been matched and merged with counters from the + source histogram. */ + while (tgt_num > 0 && !src_done) + { + /* If this is either the first time through this loop or we just + exhausted the previous non-zero source histogram entry, look + for the next non-zero source histogram entry. */ + if (!src_num) + { + /* Locate the next non-zero entry. */ + while (src_i >= 0 && !src_histo[src_i].num_counters) + src_i--; + /* If source histogram has fewer counters, then just copy over the + remaining target counters and quit. */ + if (src_i < 0) + { + tmp_histo[tgt_i].num_counters += tgt_num; + tmp_histo[tgt_i].cum_value += tgt_cum; + if (!tmp_histo[tgt_i].min_value || + tgt_histo[tgt_i].min_value < tmp_histo[tgt_i].min_value) + tmp_histo[tgt_i].min_value = tgt_histo[tgt_i].min_value; + while (--tgt_i >= 0) + { + tmp_histo[tgt_i].num_counters + += tgt_histo[tgt_i].num_counters; + tmp_histo[tgt_i].cum_value += tgt_histo[tgt_i].cum_value; + if (!tmp_histo[tgt_i].min_value || + tgt_histo[tgt_i].min_value + < tmp_histo[tgt_i].min_value) + tmp_histo[tgt_i].min_value = tgt_histo[tgt_i].min_value; + } + + src_done = 1; + break; + } + + src_num = src_histo[src_i].num_counters; + src_cum = src_histo[src_i].cum_value; + } + + /* The number of counters to merge on this pass is the minimum + of the remaining counters from the current target and source + histogram entries. */ + merge_num = tgt_num; + if (src_num < merge_num) + merge_num = src_num; + + /* The merged min_value is the sum of the min_values from target + and source. */ + merge_min = tgt_histo[tgt_i].min_value + src_histo[src_i].min_value; + + /* Compute the portion of source and target entries' cum_value + that will be apportioned to the counters being merged. + The total remaining cum_value from each entry is divided + equally among the counters from that histogram entry if we + are not merging all of them. */ + merge_src_cum = src_cum; + if (merge_num < src_num) + merge_src_cum = merge_num * src_cum / src_num; + merge_tgt_cum = tgt_cum; + if (merge_num < tgt_num) + merge_tgt_cum = merge_num * tgt_cum / tgt_num; + /* The merged cum_value is the sum of the source and target + components. 
*/ + merge_cum = merge_src_cum + merge_tgt_cum; + + /* Update the remaining number of counters and cum_value left + to be merged from this source and target entry. */ + src_cum -= merge_src_cum; + tgt_cum -= merge_tgt_cum; + src_num -= merge_num; + tgt_num -= merge_num; + + /* The merged counters get placed in the new merged histogram + at the entry for the merged min_value. */ + tmp_i = gcov_histo_index (merge_min); + gcc_assert (tmp_i < GCOV_HISTOGRAM_SIZE); + tmp_histo[tmp_i].num_counters += merge_num; + tmp_histo[tmp_i].cum_value += merge_cum; + if (!tmp_histo[tmp_i].min_value || + merge_min < tmp_histo[tmp_i].min_value) + tmp_histo[tmp_i].min_value = merge_min; + + /* Ensure the search for the next non-zero src_histo entry starts + at the next smallest histogram bucket. */ + if (!src_num) + src_i--; + } + } + + gcc_assert (tgt_i < 0); + + /* In the case where there were more counters in the source histogram, + accumulate the remaining unmerged cumulative counter values. Add + those to the smallest non-zero target histogram entry. Otherwise, + the total cumulative counter values in the histogram will be smaller + than the sum_all stored in the summary, which will complicate + computing the working set information from the histogram later on. */ + if (src_num) + src_i--; + while (src_i >= 0) + { + src_cum += src_histo[src_i].cum_value; + src_i--; + } + /* At this point, tmp_i should be the smallest non-zero entry in the + tmp_histo. */ + gcc_assert (tmp_i >= 0 && tmp_i < GCOV_HISTOGRAM_SIZE + && tmp_histo[tmp_i].num_counters > 0); + tmp_histo[tmp_i].cum_value += src_cum; + + /* Finally, copy the merged histogram into tgt_histo. */ + memcpy (tgt_histo, tmp_histo, + sizeof (gcov_bucket_type) * GCOV_HISTOGRAM_SIZE); +} +#endif /* !IN_GCOV */ + +/* This is used by gcov-dump (IN_GCOV == -1) and in the compiler + (!IN_GCOV && !IN_LIBGCOV). */ +#if IN_GCOV <= 0 && !IN_LIBGCOV +/* Compute the working set information from the counter histogram in + the profile summary. This is an array of information corresponding to a + range of percentages of the total execution count (sum_all), and includes + the number of counters required to cover that working set percentage and + the minimum counter value in that working set. */ + +GCOV_LINKAGE void +compute_working_sets (const struct gcov_ctr_summary *summary, + gcov_working_set_t *gcov_working_sets) +{ + gcov_type working_set_cum_values[NUM_GCOV_WORKING_SETS]; + gcov_type ws_cum_hotness_incr; + gcov_type cum, tmp_cum; + const gcov_bucket_type *histo_bucket; + unsigned ws_ix, c_num, count; + int h_ix; + + /* Compute the amount of sum_all that the cumulative hotness grows + by in each successive working set entry, which depends on the + number of working set entries. */ + ws_cum_hotness_incr = summary->sum_all / NUM_GCOV_WORKING_SETS; + + /* Next fill in an array of the cumulative hotness values corresponding + to each working set summary entry we are going to compute below. + Skip 0% statistics, which can be extrapolated from the + rest of the summary data. */ + cum = ws_cum_hotness_incr; + for (ws_ix = 0; ws_ix < NUM_GCOV_WORKING_SETS; + ws_ix++, cum += ws_cum_hotness_incr) + working_set_cum_values[ws_ix] = cum; + /* The last summary entry is reserved for (roughly) 99.9% of the + working set. Divide by 1024 so it becomes a shift, which gives + almost exactly 99.9%. 
*/ + working_set_cum_values[NUM_GCOV_WORKING_SETS-1] + = summary->sum_all - summary->sum_all/1024; + + /* Next, walk through the histogram in descending order of hotness + and compute the statistics for the working set summary array. + As histogram entries are accumulated, we check to see which + working set entries have had their expected cum_value reached + and fill them in, walking the working set entries in increasing + size of cum_value. */ + ws_ix = 0; /* The current entry into the working set array. */ + cum = 0; /* The current accumulated counter sum. */ + count = 0; /* The current accumulated count of block counters. */ + for (h_ix = GCOV_HISTOGRAM_SIZE - 1; + h_ix >= 0 && ws_ix < NUM_GCOV_WORKING_SETS; h_ix--) + { + histo_bucket = &summary->histogram[h_ix]; + + /* If we haven't reached the required cumulative counter value for + the current working set percentage, simply accumulate this histogram + entry into the running sums and continue to the next histogram + entry. */ + if (cum + histo_bucket->cum_value < working_set_cum_values[ws_ix]) + { + cum += histo_bucket->cum_value; + count += histo_bucket->num_counters; + continue; + } + + /* If adding the current histogram entry's cumulative counter value + causes us to exceed the current working set size, then estimate + how many of this histogram entry's counter values are required to + reach the working set size, and fill in working set entries + as we reach their expected cumulative value. */ + for (c_num = 0, tmp_cum = cum; + c_num < histo_bucket->num_counters && ws_ix < NUM_GCOV_WORKING_SETS; + c_num++) + { + count++; + /* If we haven't reached the last histogram entry counter, add + in the minimum value again. This will underestimate the + cumulative sum so far, because many of the counter values in this + entry may have been larger than the minimum. We could add in the + average value every time, but that would require an expensive + divide operation. */ + if (c_num + 1 < histo_bucket->num_counters) + tmp_cum += histo_bucket->min_value; + /* If we have reached the last histogram entry counter, then add + in the entire cumulative value. */ + else + tmp_cum = cum + histo_bucket->cum_value; + + /* Next walk through successive working set entries and fill in + the statistics for any whose size we have reached by accumulating + this histogram counter. */ + while (ws_ix < NUM_GCOV_WORKING_SETS + && tmp_cum >= working_set_cum_values[ws_ix]) + { + gcov_working_sets[ws_ix].num_counters = count; + gcov_working_sets[ws_ix].min_counter + = histo_bucket->min_value; + ws_ix++; + } + } + /* Finally, update the running cumulative value since we were + using a temporary above. */ + cum += histo_bucket->cum_value; + } + gcc_assert (ws_ix == NUM_GCOV_WORKING_SETS); +} +#endif /* IN_GCOV <= 0 && !IN_LIBGCOV */
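To make the bucket indexing used by gcov_histo_index and the histogram records above concrete, here is a small standalone sketch, not part of the imported sources; the name histo_index and the sample values in main are illustrative only. It mirrors the same log2-plus-two-sub-bucket-bits computation using GCC's __builtin_clzll:

#include <stdio.h>

/* Mirror of the log2 histogram indexing described above.  Values 0-3
   index the lowest four linear buckets directly; larger values use the
   position of the most significant bit (r) and the two bits below it.  */
static unsigned
histo_index (unsigned long long v)
{
  unsigned r, prev2bits;

  if (v < 4)
    return (unsigned) v;            /* Lowest four linear buckets.  */
  r = 63 - __builtin_clzll (v);     /* floor (log2 (v)); r >= 2 here.  */
  prev2bits = (v >> (r - 2)) & 3;   /* Two bits below the MSB pick the sub-bucket.  */
  return (r - 1) * 4 + prev2bits;
}

int
main (void)
{
  /* 100 = 0b1100100: r = 6, prev2bits = 2, bucket = (6 - 1) * 4 + 2 = 22.  */
  printf ("%u %u %u\n", histo_index (3), histo_index (4), histo_index (100));
  return 0;
}

This prints "3 4 22"; the largest possible value (r = 63, sub-bucket 3) lands in bucket 251, matching the 252-entry GCOV_HISTOGRAM_SIZE.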
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/gcov-io.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/gcov-io.h new file mode 100644 index 0000000..895ff98 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/gcov-io.h
@@ -0,0 +1,531 @@ +/* File format for coverage information + Copyright (C) 1996-2014 Free Software Foundation, Inc. + Contributed by Bob Manson <manson@cygnus.com>. + Completely remangled by Nathan Sidwell <nathan@codesourcery.com>. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify it under +the terms of the GNU General Public License as published by the Free +Software Foundation; either version 3, or (at your option) any later +version. + +GCC is distributed in the hope that it will be useful, but WITHOUT ANY +WARRANTY; without even the implied warranty of MERCHANTABILITY or +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License +for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + + +/* Coverage information is held in two files: a notes file, which is + generated by the compiler, and a data file, which is generated by + the program under test. Both files use a similar structure. We do + not attempt to make these files backwards compatible with previous + versions, as you only need coverage information when developing a + program. We do hold version information, so that mismatches can be + detected, and we use a format that allows tools to skip information + they do not understand or are not interested in. + + Numbers are recorded in the 32 bit unsigned binary form of the + endianness of the machine generating the file. 64 bit numbers are + stored as two 32 bit numbers, the low part first. Strings are + padded with 1 to 4 NUL bytes, to bring the length up to a multiple + of 4. The number of 4-byte words is stored, followed by the padded + string. Zero length and NULL strings are simply stored as a length + of zero (they have no trailing NUL or padding). + + int32: byte3 byte2 byte1 byte0 | byte0 byte1 byte2 byte3 + int64: int32:low int32:high + string: int32:0 | int32:length char* char:0 padding + padding: | char:0 | char:0 char:0 | char:0 char:0 char:0 + item: int32 | int64 | string + + The basic format of the files is + + file : int32:magic int32:version int32:stamp record* + + The magic ident is different for the notes and the data files. The + magic ident is used to determine the endianness of the file, when + reading. The version is the same for both files and is derived + from gcc's version number. The stamp value is used to synchronize + note and data files and to synchronize merging within a data + file. It need not be an absolute time stamp, merely a ticker that + increments fast enough and cycles slow enough to distinguish + different compile/run/compile cycles. + + Although the ident and version are formally 32 bit numbers, they + are derived from 4 character ASCII strings. The version number + consists of the single character major version number, a two + character minor version number (leading zero for versions less than + 10), and a single character indicating the status of the release. + That will be 'e' for experimental, 'p' for prerelease and 'r' for + release. Because, by good fortune, these are in alphabetical order, + string collating can be used to compare version strings. 
Be aware that + the 'e' designation will (naturally) be unstable and might be + incompatible with itself. For gcc 3.4 experimental, it would be + '304e' (0x33303465). When the major version reaches 10, the + letters A-Z will be used. Assuming minor increments release every + 6 months, we have to make a major increment every 50 years. + Assuming major increments release every 5 years, we're ok for the + next 155 years -- good enough for me. + + A record has a tag, length and variable amount of data. + + record: header data + header: int32:tag int32:length + data: item* + + Records are not nested, but there is a record hierarchy. Tag + numbers reflect this hierarchy. Tags are unique across note and + data files. Some record types have a varying amount of data. The + LENGTH is the number of 4-byte words that follow and is usually + used to determine how much data follows. The tag value is split into + 4 8-bit + fields, one for each of four possible levels. The most significant + is allocated first. Unused levels are zero. Active levels are + odd-valued, so that the LSB of the level is one. A sub-level + incorporates the values of its superlevels. This formatting allows + you to determine the tag hierarchy, without understanding the tags + themselves, and is similar to the standard section numbering used + in technical documents. Level values [1..3f] are used for common + tags, values [41..9f] for the notes file and [a1..ff] for the data + file. + + The notes file contains the following records + note: unit function-graph* + unit: header int32:checksum string:source + function-graph: announce_function basic_blocks {arcs | lines}* + announce_function: header int32:ident + int32:lineno_checksum int32:cfg_checksum + string:name string:source int32:lineno + basic_block: header int32:flags* + arcs: header int32:block_no arc* + arc: int32:dest_block int32:flags + lines: header int32:block_no line* + int32:0 string:NULL + line: int32:line_no | int32:0 string:filename + + The BASIC_BLOCK record holds per-bb flags. The number of blocks + can be inferred from its data length. There is one ARCS record per + basic block. The number of arcs from a bb is implicit from the + data length. It enumerates the destination bb and per-arc flags. + There is one LINES record per basic block; it enumerates the source + lines which belong to that basic block. Source file names are + introduced by a line number of 0; following lines are from the new + source file. The initial source file for the function is NULL, but + the current source file should be remembered from one LINES record + to the next. The end of a block is indicated by an empty filename + - this does not reset the current source file. Note there is no + ordering of the ARCS and LINES records: they may be in any order, + interleaved in any manner. The current filename follows the order + the LINES records are stored in the file, *not* the ordering of the + blocks they are for. + + The data file contains the following records. 
+ data: {unit summary:program* build_info zero_fixup function-data*}* + unit: header int32:checksum + function-data: announce_function present counts + announce_function: header int32:ident + int32:lineno_checksum int32:cfg_checksum + present: header int32:present + counts: header int64:count* + summary: int32:checksum {count-summary}GCOV_COUNTERS_SUMMABLE + count-summary: int32:num int32:runs int64:sum + int64:max int64:sum_max histogram + histogram: {int32:bitvector}8 histogram-buckets* + histogram-buckets: int32:num int64:min int64:sum + build_info: string:info* + zero_fixup: int32:num int32:bitvector* + + The ANNOUNCE_FUNCTION record is the same as that in the note file, + but without the source location. The COUNTS record gives the + counter values for instrumented features. The SUMMARY records give + information about the whole program. The checksum is used for whole + program summaries, and disambiguates different programs which + include the same instrumented object file. There may be several + program summaries, each with a unique checksum. The object + summary's checksum is zero. Note that the data file might contain + information from several runs concatenated, or the data might be + merged. + + The BUILD_INFO record contains a list of strings used to include in + the data file information about the profile-generate build. For + example, it can be used to include source revision information that + is useful in diagnosing profile mismatches. + + The ZERO_FIXUP record contains a count of the functions in the gcda + file and an array of bitvectors indexed by the function indices in + the function-data section. Each bit flags whether the function was + a COMDAT whose all-zero profile was fixed up by dyn-ipa using + profiles from functions with matching checksums in other modules. + + This file is included by the compiler, the gcov tools and the + runtime support library libgcov. IN_LIBGCOV and IN_GCOV are used to + distinguish which case is which. If IN_LIBGCOV is nonzero, + libgcov is being built. If IN_GCOV is nonzero, the gcov tools are + being built. Otherwise the compiler is being built. IN_GCOV may be + positive or negative. If positive, we are compiling a tool that + requires additional functions (see the code for knowledge of what + those functions are). 
*/ + +#ifndef GCC_GCOV_IO_H +#define GCC_GCOV_IO_H + +#ifndef __KERNEL__ +# define _GCOV_FILE FILE +# define _GCOV_fclose fclose +# define _GCOV_ftell ftell +# define _GCOV_fseek fseek +# define _GCOV_ftruncate ftruncate +# define _GCOV_fread fread +# define _GCOV_fwrite fwrite +# define _GCOV_fileno fileno +# define _GCOV_fopen fopen +#endif + +#ifndef IN_LIBGCOV +/* About the host */ + +typedef unsigned gcov_unsigned_t; +typedef unsigned gcov_position_t; + +#if LONG_LONG_TYPE_SIZE > 32 +#define GCOV_TYPE_ATOMIC_FETCH_ADD_FN __atomic_fetch_add_8 +#define GCOV_TYPE_ATOMIC_FETCH_ADD BUILT_IN_ATOMIC_FETCH_ADD_8 +#else +#define GCOV_TYPE_ATOMIC_FETCH_ADD_FN __atomic_fetch_add_4 +#define GCOV_TYPE_ATOMIC_FETCH_ADD BUILT_IN_ATOMIC_FETCH_ADD_4 +#endif +#define PROFILE_GEN_EDGE_ATOMIC (flag_profile_gen_atomic == 1 || \ + flag_profile_gen_atomic == 3) +#define PROFILE_GEN_VALUE_ATOMIC (flag_profile_gen_atomic == 2 || \ + flag_profile_gen_atomic == 3) + +/* gcov_type is typedef'd elsewhere for the compiler */ +#if IN_GCOV +#define GCOV_LINKAGE static +typedef HOST_WIDEST_INT gcov_type; +typedef unsigned HOST_WIDEST_INT gcov_type_unsigned; +#if IN_GCOV > 0 +#include <sys/types.h> +#endif + +#define FUNC_ID_WIDTH HOST_BITS_PER_WIDE_INT/2 +#define FUNC_ID_MASK ((1L << FUNC_ID_WIDTH) - 1) +#define EXTRACT_MODULE_ID_FROM_GLOBAL_ID(gid) (unsigned)(((gid) >> FUNC_ID_WIDTH) & FUNC_ID_MASK) +#define EXTRACT_FUNC_ID_FROM_GLOBAL_ID(gid) (unsigned)((gid) & FUNC_ID_MASK) +#define FUNC_GLOBAL_ID(m,f) ((((HOST_WIDE_INT) (m)) << FUNC_ID_WIDTH) | (f)) + +#else /*!IN_GCOV */ +#define GCOV_TYPE_SIZE (LONG_LONG_TYPE_SIZE > 32 ? 64 : 32) +#endif + +#if defined (HOST_HAS_F_SETLKW) +#define GCOV_LOCKED 1 +#else +#define GCOV_LOCKED 0 +#endif + +#define ATTRIBUTE_HIDDEN + +#endif /* !IN_LIBGCOV */ + +#ifndef GCOV_LINKAGE +#define GCOV_LINKAGE extern +#endif + +/* File suffixes. */ +#define GCOV_DATA_SUFFIX ".gcda" +#define GCOV_NOTE_SUFFIX ".gcno" + +/* File magic. Must not be palindromes. */ +#define GCOV_DATA_MAGIC ((gcov_unsigned_t)0x67636461) /* "gcda" */ +#define GCOV_NOTE_MAGIC ((gcov_unsigned_t)0x67636e6f) /* "gcno" */ + +/* gcov-iov.h is automatically generated by the makefile from + version.c; it looks like + #define GCOV_VERSION ((gcov_unsigned_t)0x89abcdef) +*/ +#include "gcov-iov.h" + +/* Convert a magic or version number to a 4 character string. */ +#define GCOV_UNSIGNED2STRING(ARRAY,VALUE) \ + ((ARRAY)[0] = (char)((VALUE) >> 24), \ + (ARRAY)[1] = (char)((VALUE) >> 16), \ + (ARRAY)[2] = (char)((VALUE) >> 8), \ + (ARRAY)[3] = (char)((VALUE) >> 0)) + +/* The record tags. Values [1..3f] are for tags which may be in either + file. Values [41..9f] for those in the note file and [a1..ff] for + the data file. The tag value zero is used as an explicit end of + file marker -- it is not required to be present. 
*/ + +#define GCOV_TAG_FUNCTION ((gcov_unsigned_t)0x01000000) +#define GCOV_TAG_FUNCTION_LENGTH (3) +#define GCOV_TAG_BLOCKS ((gcov_unsigned_t)0x01410000) +#define GCOV_TAG_BLOCKS_LENGTH(NUM) (NUM) +#define GCOV_TAG_BLOCKS_NUM(LENGTH) (LENGTH) +#define GCOV_TAG_ARCS ((gcov_unsigned_t)0x01430000) +#define GCOV_TAG_ARCS_LENGTH(NUM) (1 + (NUM) * 2) +#define GCOV_TAG_ARCS_NUM(LENGTH) (((LENGTH) - 1) / 2) +#define GCOV_TAG_LINES ((gcov_unsigned_t)0x01450000) +#define GCOV_TAG_COUNTER_BASE ((gcov_unsigned_t)0x01a10000) +#define GCOV_TAG_COUNTER_LENGTH(NUM) ((NUM) * 2) +#define GCOV_TAG_COUNTER_NUM(LENGTH) ((LENGTH) / 2) +#define GCOV_TAG_OBJECT_SUMMARY ((gcov_unsigned_t)0xa1000000) /* Obsolete */ +#define GCOV_TAG_PROGRAM_SUMMARY ((gcov_unsigned_t)0xa3000000) +#define GCOV_TAG_COMDAT_ZERO_FIXUP ((gcov_unsigned_t)0xa9000000) +/* Ceiling divide by 32 bit word size, plus one word to hold NUM. */ +#define GCOV_TAG_COMDAT_ZERO_FIXUP_LENGTH(NUM) (1 + (NUM + 31) / 32) +#define GCOV_TAG_SUMMARY_LENGTH(NUM) \ + (1 + GCOV_COUNTERS_SUMMABLE * (10 + 3 * 2) + (NUM) * 5) +#define GCOV_TAG_BUILD_INFO ((gcov_unsigned_t)0xa7000000) +#define GCOV_TAG_MODULE_INFO ((gcov_unsigned_t)0xab000000) +#define GCOV_TAG_AFDO_FILE_NAMES ((gcov_unsigned_t)0xaa000000) +#define GCOV_TAG_AFDO_FUNCTION ((gcov_unsigned_t)0xac000000) +#define GCOV_TAG_AFDO_MODULE_GROUPING ((gcov_unsigned_t)0xae000000) +#define GCOV_TAG_AFDO_WORKING_SET ((gcov_unsigned_t)0xaf000000) + +/* Counters that are collected. */ +#define DEF_GCOV_COUNTER(COUNTER, NAME, MERGE_FN) COUNTER, +enum { +#include "gcov-counter.def" +GCOV_COUNTERS +}; +#undef DEF_GCOV_COUNTER + +/* Counters which can be summarized. */ +#define GCOV_COUNTERS_SUMMABLE (GCOV_COUNTER_ARCS + 1) + +/* The first of counters used for value profiling. They must form a + consecutive interval and their order must match the order of + HIST_TYPEs in value-prof.h. */ +#define GCOV_FIRST_VALUE_COUNTER GCOV_COUNTERS_SUMMABLE + +/* The last of counters used for value profiling. */ +#define GCOV_LAST_VALUE_COUNTER (GCOV_COUNTERS - 2) + +/* Number of counters used for value profiling. */ +#define GCOV_N_VALUE_COUNTERS \ + (GCOV_LAST_VALUE_COUNTER - GCOV_FIRST_VALUE_COUNTER + 1) + +#define GCOV_ICALL_TOPN_VAL 2 /* Track two hottest callees */ +#define GCOV_ICALL_TOPN_NCOUNTS 9 /* The number of counter entries per icall callsite */ + +/* Convert a counter index to a tag. */ +#define GCOV_TAG_FOR_COUNTER(COUNT) \ + (GCOV_TAG_COUNTER_BASE + ((gcov_unsigned_t)(COUNT) << 17)) +/* Convert a tag to a counter. */ +#define GCOV_COUNTER_FOR_TAG(TAG) \ + ((unsigned)(((TAG) - GCOV_TAG_COUNTER_BASE) >> 17)) +/* Check whether a tag is a counter tag. */ +#define GCOV_TAG_IS_COUNTER(TAG) \ + (!((TAG) & 0xFFFF) && GCOV_COUNTER_FOR_TAG (TAG) < GCOV_COUNTERS) + +/* The tag level mask has 1's in the position of the inner levels, & + the lsb of the current level, and zero on the current and outer + levels. */ +#define GCOV_TAG_MASK(TAG) (((TAG) - 1) ^ (TAG)) + +/* Return nonzero if SUB is an immediate subtag of TAG. */ +#define GCOV_TAG_IS_SUBTAG(TAG,SUB) \ + (GCOV_TAG_MASK (TAG) >> 8 == GCOV_TAG_MASK (SUB) \ + && !(((SUB) ^ (TAG)) & ~GCOV_TAG_MASK (TAG))) + +/* Return nonzero if SUB is at a sublevel to TAG. */ +#define GCOV_TAG_IS_SUBLEVEL(TAG,SUB) \ + (GCOV_TAG_MASK (TAG) > GCOV_TAG_MASK (SUB)) + +/* Basic block flags. */ +#define GCOV_BLOCK_UNEXPECTED (1 << 1) + +/* Arc flags. */ +#define GCOV_ARC_ON_TREE (1 << 0) +#define GCOV_ARC_FAKE (1 << 1) +#define GCOV_ARC_FALLTHROUGH (1 << 2) + +/* Structured records. 
*/ + +/* Structure used for each bucket of the log2 histogram of counter values. */ +typedef struct +{ + /* Number of counters whose profile count falls within the bucket. */ + gcov_unsigned_t num_counters; + /* Smallest profile count included in this bucket. */ + gcov_type min_value; + /* Cumulative value of the profile counts in this bucket. */ + gcov_type cum_value; +} gcov_bucket_type; + +/* For a log2 scale histogram with each range split into 4 + linear sub-ranges, there will be at most 64 (max gcov_type bit size) - 1 log2 + ranges since the lowest 2 log2 values share the lowest 4 linear + sub-range (values 0 - 3). This is 252 total entries (63*4). */ + +#define GCOV_HISTOGRAM_SIZE 252 + +/* How many unsigned ints are required to hold a bit vector of non-zero + histogram entries when the histogram is written to the gcov file. + This is essentially a ceiling divide by 32 bits. */ +#define GCOV_HISTOGRAM_BITVECTOR_SIZE ((GCOV_HISTOGRAM_SIZE + 31) / 32) + +/* Cumulative counter data. */ +struct gcov_ctr_summary +{ + gcov_unsigned_t num; /* number of counters. */ + gcov_unsigned_t runs; /* number of program runs */ + gcov_type sum_all; /* sum of all counters accumulated. */ + gcov_type run_max; /* maximum value on a single run. */ + gcov_type sum_max; /* sum of individual run max values. */ + gcov_bucket_type histogram[GCOV_HISTOGRAM_SIZE]; /* histogram of + counter values. */ +}; + +/* Object & program summary record. */ +struct gcov_summary +{ + gcov_unsigned_t checksum; /* checksum of program */ + struct gcov_ctr_summary ctrs[GCOV_COUNTERS_SUMMABLE]; +}; + +#define GCOV_MODULE_UNKNOWN_LANG 0 +#define GCOV_MODULE_C_LANG 1 +#define GCOV_MODULE_CPP_LANG 2 +#define GCOV_MODULE_FORT_LANG 3 + +#define GCOV_MODULE_ASM_STMTS (1 << 16) +#define GCOV_MODULE_LANG_MASK 0xffff + +/* Source module info. The data structure is used in + both runtime and profile-use phase. Make sure to allocate + enough space for the variable length member. */ +struct gcov_module_info +{ + gcov_unsigned_t ident; + gcov_unsigned_t is_primary; /* this is overloaded to mean two things: + (1) means FDO/LIPO in instrumented binary. + (2) means IS_PRIMARY in persistent file or + memory copy used in profile-use. */ + gcov_unsigned_t flags; /* bit 0: is_exported, + bit 1: need to include all the auxiliary + modules in use compilation. */ + gcov_unsigned_t lang; /* lower 16 bits encode the language, and the upper + 16 bits encode other attributes, such as whether + any assembler is present in the source, etc. */ + gcov_unsigned_t ggc_memory; /* memory needed for parsing in kb */ + char *da_filename; + char *source_filename; + gcov_unsigned_t num_quote_paths; + gcov_unsigned_t num_bracket_paths; + gcov_unsigned_t num_system_paths; + gcov_unsigned_t num_cpp_defines; + gcov_unsigned_t num_cpp_includes; + gcov_unsigned_t num_cl_args; + char *string_array[1]; +}; + +extern struct gcov_module_info **module_infos; +extern unsigned primary_module_id; +#define SET_MODULE_INCLUDE_ALL_AUX(modu) ((modu->flags |= 0x2)) +#define MODULE_INCLUDE_ALL_AUX_FLAG(modu) ((modu->flags & 0x2)) +#define SET_MODULE_EXPORTED(modu) ((modu->flags |= 0x1)) +#define MODULE_EXPORTED_FLAG(modu) ((modu->flags & 0x1)) +#define PRIMARY_MODULE_EXPORTED \ + (MODULE_EXPORTED_FLAG (module_infos[0]) \ + && !((module_infos[0]->lang & GCOV_MODULE_ASM_STMTS) \ + && flag_ripa_disallow_asm_modules)) + +#if !defined(inhibit_libc) + +/* Functions for reading and writing gcov files. In libgcov you can + open the file for reading then writing. 
Elsewhere you can open the + file either for reading or for writing. When reading a file you may + use the gcov_read_* functions, gcov_sync, gcov_position, & + gcov_error. When writing a file you may use the gcov_write + functions, gcov_seek & gcov_error. When a file is to be rewritten + you use the functions for reading, then gcov_rewrite then the + functions for writing. Your file may become corrupted if you break + these invariants. */ + +#if !IN_LIBGCOV +GCOV_LINKAGE int gcov_open (const char */*name*/, int /*direction*/); +GCOV_LINKAGE int gcov_magic (gcov_unsigned_t, gcov_unsigned_t); +#endif + +/* Available everywhere. */ +GCOV_LINKAGE int gcov_close (void) ATTRIBUTE_HIDDEN; +GCOV_LINKAGE gcov_unsigned_t gcov_read_unsigned (void) ATTRIBUTE_HIDDEN; +GCOV_LINKAGE gcov_type gcov_read_counter (void) ATTRIBUTE_HIDDEN; +GCOV_LINKAGE void gcov_read_summary (struct gcov_summary *) ATTRIBUTE_HIDDEN; +GCOV_LINKAGE int *gcov_read_comdat_zero_fixup (gcov_unsigned_t, + gcov_unsigned_t *) + ATTRIBUTE_HIDDEN; +GCOV_LINKAGE char **gcov_read_build_info (gcov_unsigned_t, gcov_unsigned_t *) + ATTRIBUTE_HIDDEN; +GCOV_LINKAGE const char *gcov_read_string (void); +GCOV_LINKAGE void gcov_sync (gcov_position_t /*base*/, + gcov_unsigned_t /*length */); +GCOV_LINKAGE gcov_unsigned_t gcov_read_string_array (char **, gcov_unsigned_t) + ATTRIBUTE_HIDDEN; + + +#if !IN_LIBGCOV && IN_GCOV != 1 +GCOV_LINKAGE void gcov_read_module_info (struct gcov_module_info *mod_info, + gcov_unsigned_t len) ATTRIBUTE_HIDDEN; +#endif + +#if !IN_GCOV +/* Available outside gcov */ +GCOV_LINKAGE void gcov_write_unsigned (gcov_unsigned_t) ATTRIBUTE_HIDDEN; +GCOV_LINKAGE gcov_unsigned_t gcov_compute_string_array_len (char **, + gcov_unsigned_t) + ATTRIBUTE_HIDDEN; +GCOV_LINKAGE void gcov_write_string_array (char **, gcov_unsigned_t) + ATTRIBUTE_HIDDEN; +#endif + +#if !IN_GCOV && !IN_LIBGCOV +/* Available only in compiler */ +GCOV_LINKAGE unsigned gcov_histo_index (gcov_type value); +GCOV_LINKAGE void gcov_write_string (const char *); +GCOV_LINKAGE gcov_position_t gcov_write_tag (gcov_unsigned_t); +GCOV_LINKAGE void gcov_write_length (gcov_position_t /*position*/); +#endif + +#if IN_GCOV <= 0 && !IN_LIBGCOV +/* Available in gcov-dump and the compiler. */ + +/* Number of data points in the working set summary array. Using 128 + provides information for at least every 1% increment of the total + profile size. The last entry is hardwired to 99.9% of the total. */ +#define NUM_GCOV_WORKING_SETS 128 + +/* Working set size statistics for a given percentage of the entire + profile (sum_all from the counter summary). */ +typedef struct gcov_working_set_info +{ + /* Number of hot counters included in this working set. */ + unsigned num_counters; + /* Smallest counter included in this working set. */ + gcov_type min_counter; +} gcov_working_set_t; + +GCOV_LINKAGE void compute_working_sets (const struct gcov_ctr_summary *summary, + gcov_working_set_t *gcov_working_sets); +#endif + +#if IN_GCOV > 0 +/* Available in gcov */ +GCOV_LINKAGE time_t gcov_time (void); +#endif + +#endif /* !inhibit_libc */ + +#endif /* GCC_GCOV_IO_H */
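As a quick sanity check of the tag-hierarchy encoding documented in this header, the following standalone snippet evaluates GCOV_TAG_MASK for the FUNCTION and ARCS tags and confirms that ARCS is an immediate subtag of FUNCTION. The tag constants and macros are copied from the header above; the main function is illustrative only.

#include <stdio.h>

/* Tag constants and macros as defined in gcov-io.h.  */
#define GCOV_TAG_FUNCTION 0x01000000u
#define GCOV_TAG_ARCS     0x01430000u
#define GCOV_TAG_MASK(TAG) (((TAG) - 1) ^ (TAG))
#define GCOV_TAG_IS_SUBTAG(TAG,SUB)                        \
  (GCOV_TAG_MASK (TAG) >> 8 == GCOV_TAG_MASK (SUB)         \
   && !(((SUB) ^ (TAG)) & ~GCOV_TAG_MASK (TAG)))

int
main (void)
{
  /* Masks: FUNCTION -> 0x01ffffff, ARCS -> 0x0001ffff; ARCS sits one
     level deeper with the same prefix, so IS_SUBTAG yields nonzero.  */
  printf ("mask(FUNCTION)=%08x mask(ARCS)=%08x subtag=%d\n",
          GCOV_TAG_MASK (GCOV_TAG_FUNCTION),
          GCOV_TAG_MASK (GCOV_TAG_ARCS),
          GCOV_TAG_IS_SUBTAG (GCOV_TAG_FUNCTION, GCOV_TAG_ARCS) ? 1 : 0);
  return 0;
}

This mirrors the "section numbering" scheme the header describes: the mask exposes exactly the levels below the tag's own active level.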
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/gcov-iov.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/gcov-iov.h new file mode 100644 index 0000000..0da8e1d --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/gcov-iov.h
@@ -0,0 +1,4 @@ +/* Generated automatically by the program `build/gcov-iov' + from `4.9.x (4 9) and prerelease (*)'. */ + +#define GCOV_VERSION ((gcov_unsigned_t)0x3430392a) /* 409* */
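For illustration, the version word defined here unpacks back into its 4-character form with the same shifts as GCOV_UNSIGNED2STRING in gcov-io.h; a minimal sketch (not part of the sources):

#include <stdio.h>

typedef unsigned gcov_unsigned_t;

/* Same byte-by-shift unpacking as GCOV_UNSIGNED2STRING in gcov-io.h.  */
#define GCOV_UNSIGNED2STRING(ARRAY,VALUE) \
  ((ARRAY)[0] = (char)((VALUE) >> 24), \
   (ARRAY)[1] = (char)((VALUE) >> 16), \
   (ARRAY)[2] = (char)((VALUE) >> 8),  \
   (ARRAY)[3] = (char)((VALUE) >> 0))

int
main (void)
{
  char v[5] = {0};
  GCOV_UNSIGNED2STRING (v, (gcov_unsigned_t) 0x3430392a);
  printf ("%s\n", v); /* Prints "409*": major 4, minor 09, '*' from the
                         prerelease stamp recorded by build/gcov-iov.  */
  return 0;
}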
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-driver-kernel.c b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-driver-kernel.c new file mode 100644 index 0000000..34298ed --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-driver-kernel.c
@@ -0,0 +1,203 @@ +/* Routines required for instrumenting a program. */ +/* Compile this one with gcc. */ +/* Copyright (C) 1989-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify it under +the terms of the GNU General Public License as published by the Free +Software Foundation; either version 3, or (at your option) any later +version. + +GCC is distributed in the hope that it will be useful, but WITHOUT ANY +WARRANTY; without even the implied warranty of MERCHANTABILITY or +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License +for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + + +/* A utility function for outputting errors. */ + +static int __attribute__((format(printf, 1, 2))) +gcov_error (const char *fmt, ...) +{ + int ret; + va_list argp; + va_start (argp, fmt); + ret = vprintk (fmt, argp); + va_end (argp); + return ret; +} + +static void +allocate_filename_struct (struct gcov_filename_aux *gf) +{ + const char *gcov_prefix; + int gcov_prefix_strip = 0; + size_t prefix_length = 0; + char *gi_filename_up; + + /* Allocate and initialize the filename scratch space plus one. */ + gi_filename = (char *) xmalloc (prefix_length + gcov_max_filename + 2); + if (prefix_length) + memcpy (gi_filename, gcov_prefix, prefix_length); + gi_filename_up = gi_filename + prefix_length; + + gf->gi_filename_up = gi_filename_up; + gf->prefix_length = prefix_length; + gf->gcov_prefix_strip = gcov_prefix_strip; +} + +static int +gcov_open_by_filename (char *gi_filename) +{ + gcov_open (gi_filename); + return 0; +} + + +/* Strip GCOV_PREFIX_STRIP levels of leading '/' from FILENAME and + put the result into GI_FILENAME_UP. */ + +static void +gcov_strip_leading_dirs (int prefix_length, int gcov_prefix_strip, + const char *filename, char *gi_filename_up) +{ + strcpy (gi_filename_up, filename); +} + +/* Current virtual gcda file. This is for kernel use only. */ +gcov_kernel_vfile *gcov_current_file; + +/* Set current virtual gcda file. It needs to be set before dumping + profile data. */ + +void +gcov_set_vfile (gcov_kernel_vfile *file) +{ + gcov_current_file = file; +} + +/* File fclose operation in kernel mode. */ + +int +kernel_file_fclose (gcov_kernel_vfile *fp) +{ + return 0; +} + +/* File ftell operation in kernel mode. It currently should not + be called. */ + +long +kernel_file_ftell (gcov_kernel_vfile *fp) +{ + return 0; +} + +/* File fseek operation in kernel mode. It should only be called + with OFFSET==0 and WHENCE==0 to a freshly opened file. */ + +int +kernel_file_fseek (gcov_kernel_vfile *fp, long offset, int whence) +{ + gcc_assert (offset == 0 && whence == 0 && fp->count == 0); + return 0; +} + +/* File ftruncate operation in kernel mode. It currently should not + be called. */ + +int +kernel_file_ftruncate (gcov_kernel_vfile *fp, off_t value) +{ + gcc_assert (0); /* should not reach here */ + return 0; +} + +/* File fread operation in kernel mode. It currently should not + be called. 
*/ + +int +kernel_file_fread (void *ptr, size_t size, size_t nitems, + gcov_kernel_vfile *fp) +{ + gcc_assert (0); /* should not reach here */ + return 0; +} + +/* File fwrite operation in kernel mode. It outputs the data + to a buffer in the virtual file. */ + +int +kernel_file_fwrite (const void *ptr, size_t size, + size_t nitems, gcov_kernel_vfile *fp) +{ + char *vbuf; + unsigned vsize, vpos; + unsigned len; + + if (!fp) return 0; + + vbuf = fp->buf; + vsize = fp->size; + vpos = fp->count; + + if (vsize < vpos) + { + printk (KERN_ERR + "GCOV_KERNEL: something wrong in file %s: vbuf=%p vsize=%u" + " vpos=%u\n", + fp->info->filename, vbuf, vsize, vpos); + return 0; + } + + len = vsize - vpos; + len /= size; + + /* Increase the virtual file size if it is not sufficient. */ + while (len < nitems) + { + vsize *= 2; + len = vsize - vpos; + len /= size; + } + + if (vsize != fp->size) + { + vbuf = fp->buf = (char *) gcov_realloc_file_buf(vsize, vpos); + fp->size = vsize; + } + + if (len > nitems) + len = nitems; + + memcpy (vbuf+vpos, ptr, size*len); + fp->count += len*size; + + if (len != nitems) + printk (KERN_ERR + "GCOV_KERNEL: something wrong in file %s: size=%lu nitems=%lu" + " len=%d vsize=%u vpos=%u\n", + fp->info->filename, size, nitems, len, vsize, vpos); + return len; +} + +/* File fileno operation in kernel mode. It currently should not + be called. */ + +int +kernel_file_fileno (gcov_kernel_vfile *fp) +{ + gcc_assert (0); /* should not reach here */ + return 0; +}
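The growth policy in kernel_file_fwrite above (keep doubling the virtual buffer until the write fits, then copy) can be modeled in user space. A minimal sketch under stated assumptions: the vfile struct is a hypothetical stand-in for gcov_kernel_vfile, plain realloc replaces gcov_realloc_file_buf, fp->size is assumed nonzero, and error handling is omitted:

#include <stdlib.h>
#include <string.h>

/* Hypothetical user-space stand-in for gcov_kernel_vfile.  */
struct vfile { char *buf; unsigned size; unsigned count; };

/* Append SIZE*NITEMS bytes, doubling the buffer until they fit, mirroring
   the loop in kernel_file_fwrite.  Assumes fp->size > 0; no overflow or
   allocation-failure checks.  */
static size_t
vfile_write (struct vfile *fp, const void *ptr, size_t size, size_t nitems)
{
  unsigned vsize = fp->size;

  while ((vsize - fp->count) / size < nitems)
    vsize *= 2;                        /* Double until the write fits.  */
  if (vsize != fp->size)
    {
      fp->buf = realloc (fp->buf, vsize);
      fp->size = vsize;
    }
  memcpy (fp->buf + fp->count, ptr, size * nitems);
  fp->count += size * nitems;
  return nitems;
}

int
main (void)
{
  struct vfile f = { malloc (16), 16, 0 };
  char data[100];

  memset (data, 'x', sizeof (data));
  vfile_write (&f, data, 1, sizeof (data)); /* Grows 16 -> 128 bytes.  */
  free (f.buf);
  return 0;
}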
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-driver.c b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-driver.c new file mode 100644 index 0000000..3c569f1 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-driver.c
@@ -0,0 +1,1550 @@ +/* Routines required for instrumenting a program. */ +/* Compile this one with gcc. */ +/* Copyright (C) 1989-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify it under +the terms of the GNU General Public License as published by the Free +Software Foundation; either version 3, or (at your option) any later +version. + +GCC is distributed in the hope that it will be useful, but WITHOUT ANY +WARRANTY; without even the implied warranty of MERCHANTABILITY or +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License +for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +#include "libgcov.h" + +#if defined(inhibit_libc) +/* If libc and its header files are not available, provide dummy functions. */ + +#if defined(L_gcov) +void __gcov_init (struct gcov_info *p __attribute__ ((unused))) {} +#endif + +#else /* inhibit_libc */ + +#if !defined(__KERNEL__) +#include <string.h> +#if GCOV_LOCKED +#include <fcntl.h> +#include <errno.h> +#include <sys/stat.h> +#endif +#endif /* __KERNEL__ */ + +#ifdef L_gcov +#include "gcov-io.c" + +/* Unique identifier assigned to each module (object file). */ +static gcov_unsigned_t gcov_cur_module_id = 0; + + +/* Dynamic call graph build and form module groups. */ +int __gcov_compute_module_groups (char **zero_counts) ATTRIBUTE_HIDDEN; +void __gcov_finalize_dyn_callgraph (void) ATTRIBUTE_HIDDEN; + +/* The following functions can be called from outside of this file. */ +extern void gcov_clear (void) ATTRIBUTE_HIDDEN; +extern void gcov_exit (void) ATTRIBUTE_HIDDEN; +extern void set_gcov_dump_complete (void) ATTRIBUTE_HIDDEN; +extern void reset_gcov_dump_complete (void) ATTRIBUTE_HIDDEN; +extern int get_gcov_dump_complete (void) ATTRIBUTE_HIDDEN; +extern void set_gcov_list (struct gcov_info *) ATTRIBUTE_HIDDEN; +__attribute__((weak)) void __coverage_callback (gcov_type, int); + +#if !defined(IN_GCOV_TOOL) && !defined(__KERNEL__) +extern gcov_unsigned_t __gcov_sampling_period; +extern gcov_unsigned_t __gcov_has_sampling; +static int gcov_sampling_period_initialized = 0; + +/* Create a strong reference to these symbols so that they are + unconditionally pulled into the instrumented binary, even when + the only reference is a weak reference. This is necessary because + we are using weak references to enable references from code that + may not be linked with libgcov. These are the only symbols that + should be accessed via link references from application code! + + A subtlety of the linker is that it will only resolve weak references + defined within archive libraries when there is a strong reference to + something else defined within the same object file. Since these functions + are defined within their own object files, they would not automatically + get resolved. Since there are symbols within the main L_gcov + section that are strongly referenced during -fprofile-generate and + -ftest-coverage builds, these dummy symbols will always need to be + resolved. 
*/ +void (*__gcov_dummy_ref1)(void) = &__gcov_reset; +void (*__gcov_dummy_ref2)(void) = &__gcov_dump; +extern char *__gcov_get_profile_prefix (void); +char *(*__gcov_dummy_ref3)(void) = &__gcov_get_profile_prefix; +extern void __gcov_set_sampling_period (unsigned int period); +void (*__gcov_dummy_ref4)(unsigned int) = &__gcov_set_sampling_period; +extern unsigned int __gcov_sampling_enabled (void); +unsigned int (*__gcov_dummy_ref5)(void) = &__gcov_sampling_enabled; +extern void __gcov_flush (void); +void (*__gcov_dummy_ref6)(void) = &__gcov_flush; +extern unsigned int __gcov_profiling_for_test_coverage (void); +unsigned int (*__gcov_dummy_ref7)(void) = &__gcov_profiling_for_test_coverage; +#endif + +/* Default callback function for profile instrumentation callback. */ +__attribute__((weak)) void +__coverage_callback (gcov_type funcdef_no __attribute__ ((unused)), + int edge_no __attribute__ ((unused))) +{ + /* nothing */ +} + +struct gcov_fn_buffer +{ + struct gcov_fn_buffer *next; + unsigned fn_ix; + struct gcov_fn_info info; + /* note gcov_fn_info ends in a trailing array. */ +}; + +struct gcov_summary_buffer +{ + struct gcov_summary_buffer *next; + struct gcov_summary summary; +}; + +/* Chain of per-object gcov structures. */ +extern struct gcov_info *__gcov_list; + +/* Set the head of gcov_list. */ +void +set_gcov_list (struct gcov_info *head) +{ + __gcov_list = head; +} + +/* Flag if the current function being read was marked as having fixed-up + zero counters. */ +static int __gcov_curr_fn_fixed_up; + +/* Set function fixed up flag. */ +void +set_gcov_fn_fixed_up (int fixed_up) +{ + __gcov_curr_fn_fixed_up = fixed_up; +} + +/* Return function fixed up flag. */ +int +get_gcov_fn_fixed_up (void) +{ + return __gcov_curr_fn_fixed_up; +} + +/* Size of the longest file name. */ +/* We need to expose this static variable when compiling for gcov-tool. */ +#ifndef IN_GCOV_TOOL +static +#endif +size_t gcov_max_filename = 0; + +/* Flag when the profile has already been dumped via __gcov_dump(). */ +static int gcov_dump_complete; + +/* A global function that gets the value of gcov_dump_complete. */ + +int +get_gcov_dump_complete (void) +{ + return gcov_dump_complete; +} + +/* A global function that sets the value of gcov_dump_complete. Will + be used in __gcov_dump() in libgcov-interface.c. */ + +void +set_gcov_dump_complete (void) +{ + gcov_dump_complete = 1; +} + +/* A global function that resets the value of gcov_dump_complete. Will + be used in __gcov_reset() in libgcov-interface.c. */ + +void +reset_gcov_dump_complete (void) +{ + gcov_dump_complete = 0; +} + +/* A utility function for outputting errors. 
*/ +static int gcov_error (const char *, ...); + +static struct gcov_fn_buffer * +free_fn_data (const struct gcov_info *gi_ptr, struct gcov_fn_buffer *buffer, + unsigned limit) +{ + struct gcov_fn_buffer *next; + unsigned ix, n_ctr = 0; + + if (!buffer) + return 0; + next = buffer->next; + + for (ix = 0; ix != limit; ix++) + if (gi_ptr->merge[ix]) + xfree (buffer->info.ctrs[n_ctr++].values); + xfree (buffer); + return next; +} + +static struct gcov_fn_buffer ** +buffer_fn_data (const char *filename, const struct gcov_info *gi_ptr, + struct gcov_fn_buffer **end_ptr, unsigned fn_ix) +{ + unsigned n_ctrs = 0, ix = 0; + struct gcov_fn_buffer *fn_buffer; + unsigned len; + + for (ix = GCOV_COUNTERS; ix--;) + if (gi_ptr->merge[ix]) + n_ctrs++; + + len = sizeof (*fn_buffer) + sizeof (fn_buffer->info.ctrs[0]) * n_ctrs; + fn_buffer = (struct gcov_fn_buffer *) xmalloc (len); + + if (!fn_buffer) + goto fail; + + fn_buffer->next = 0; + fn_buffer->fn_ix = fn_ix; + fn_buffer->info.ident = gcov_read_unsigned (); + fn_buffer->info.lineno_checksum = gcov_read_unsigned (); + fn_buffer->info.cfg_checksum = gcov_read_unsigned (); + + for (n_ctrs = ix = 0; ix != GCOV_COUNTERS; ix++) + { + gcov_unsigned_t length; + gcov_type *values; + + if (!gi_ptr->merge[ix]) + continue; + + if (gcov_read_unsigned () != GCOV_TAG_FOR_COUNTER (ix)) + { + len = 0; + goto fail; + } + + length = GCOV_TAG_COUNTER_NUM (gcov_read_unsigned ()); + len = length * sizeof (gcov_type); + values = (gcov_type *) xmalloc (len); + if (!values) + goto fail; + + fn_buffer->info.ctrs[n_ctrs].num = length; + fn_buffer->info.ctrs[n_ctrs].values = values; + + while (length--) + *values++ = gcov_read_counter (); + n_ctrs++; + } + + *end_ptr = fn_buffer; + return &fn_buffer->next; + +fail: + gcov_error ("profiling:%s:Function %u %s %u\n", filename, fn_ix, + len ? "cannot allocate" : "counter mismatch", len ? len : ix); + + return (struct gcov_fn_buffer **)free_fn_data (gi_ptr, fn_buffer, ix); +} + +/* Determine whether a counter is active. */ + +static inline int +gcov_counter_active (const struct gcov_info *info, unsigned int type) +{ + return (info->merge[type] != 0); +} + +/* Add an unsigned value to the current crc. */ + +static gcov_unsigned_t +crc32_unsigned (gcov_unsigned_t crc32, gcov_unsigned_t value) +{ + unsigned ix; + + for (ix = 32; ix--; value <<= 1) + { + unsigned feedback; + + feedback = (value ^ crc32) & 0x80000000 ? 0x04c11db7 : 0; + crc32 <<= 1; + crc32 ^= feedback; + } + + return crc32; +} + +/* Check if VERSION of the info block PTR matches the libgcov one. + Return 1 on success, or zero in case of version mismatch. + If FILENAME is not NULL, its value is used for reporting purposes + instead of the value from the info block. */ + +static int +gcov_version (struct gcov_info *ptr, gcov_unsigned_t version, + const char *filename) +{ + if (version != GCOV_VERSION) + { + char v[4], e[4]; + + GCOV_UNSIGNED2STRING (v, version); + GCOV_UNSIGNED2STRING (e, GCOV_VERSION); + + if (filename) + gcov_error ("profiling:%s:Version mismatch - expected %.4s got %.4s\n", + filename, e, v); + else + gcov_error ("profiling:Version mismatch - expected %.4s got %.4s\n", e, v); + + return 0; + } + return 1; +} + +/* Insert counter VALUE into HISTOGRAM. 
*/ + +static void +gcov_histogram_insert (gcov_bucket_type *histogram, gcov_type value) +{ + unsigned i; + + i = gcov_histo_index (value); + histogram[i].num_counters++; + histogram[i].cum_value += value; + if (value < histogram[i].min_value) + histogram[i].min_value = value; +} + +/* Computes a histogram of the arc counters to place in the summary SUM. */ + +static void +gcov_compute_histogram (struct gcov_summary *sum) +{ + struct gcov_info *gi_ptr; + const struct gcov_fn_info *gfi_ptr; + const struct gcov_ctr_info *ci_ptr; + struct gcov_ctr_summary *cs_ptr; + unsigned t_ix, f_ix, ctr_info_ix, ix; + int h_ix; + + /* This currently only applies to arc counters. */ + t_ix = GCOV_COUNTER_ARCS; + + /* First check if there are any counts recorded for this counter. */ + cs_ptr = &(sum->ctrs[t_ix]); + if (!cs_ptr->num) + return; + + for (h_ix = 0; h_ix < GCOV_HISTOGRAM_SIZE; h_ix++) + { + cs_ptr->histogram[h_ix].num_counters = 0; + cs_ptr->histogram[h_ix].min_value = cs_ptr->run_max; + cs_ptr->histogram[h_ix].cum_value = 0; + } + + /* Walk through all the per-object structures and record each of + the count values in histogram. */ + for (gi_ptr = __gcov_list; gi_ptr; gi_ptr = gi_ptr->next) + { + if (!gi_ptr->merge[t_ix]) + continue; + + /* Find the appropriate index into the gcov_ctr_info array + for the counter we are currently working on based on the + existence of the merge function pointer for this object. */ + for (ix = 0, ctr_info_ix = 0; ix < t_ix; ix++) + { + if (gi_ptr->merge[ix]) + ctr_info_ix++; + } + for (f_ix = 0; f_ix != gi_ptr->n_functions; f_ix++) + { + gfi_ptr = gi_ptr->functions[f_ix]; + + if (!gfi_ptr || gfi_ptr->key != gi_ptr) + continue; + + ci_ptr = &gfi_ptr->ctrs[ctr_info_ix]; + for (ix = 0; ix < ci_ptr->num; ix++) + gcov_histogram_insert (cs_ptr->histogram, ci_ptr->values[ix]); + } + } +} + +/* gcda filename. */ +static char *gi_filename; +/* buffer for the fn_data from another program. */ +static struct gcov_fn_buffer *fn_buffer; +/* buffer for summary from other programs to be written out. */ +static struct gcov_summary_buffer *sum_buffer; +/* If the application calls fork or exec multiple times, we end up storing + the profile repeatedly. We should not account this as multiple runs, or + functions executed once may mistakenly become cold. */ +static int run_accounted = 0; + +/* This function computes the program-level summary and the histogram. + It computes and returns the CRC32 and stores the summary in THIS_PRG. */ + +#if !IN_GCOV_TOOL +static +#endif +gcov_unsigned_t +gcov_exit_compute_summary (struct gcov_summary *this_prg) +{ + struct gcov_info *gi_ptr; + const struct gcov_fn_info *gfi_ptr; + struct gcov_ctr_summary *cs_ptr; + const struct gcov_ctr_info *ci_ptr; + int f_ix; + unsigned t_ix; + gcov_unsigned_t c_num; + gcov_unsigned_t crc32 = 0; + + /* Find the totals for this execution. */ + memset (this_prg, 0, sizeof (*this_prg)); + for (gi_ptr = __gcov_list; gi_ptr; gi_ptr = gi_ptr->next) + { + crc32 = crc32_unsigned (crc32, gi_ptr->stamp); + crc32 = crc32_unsigned (crc32, gi_ptr->n_functions); + + for (f_ix = 0; (unsigned)f_ix != gi_ptr->n_functions; f_ix++) + { + gfi_ptr = gi_ptr->functions[f_ix]; + + if (gfi_ptr && gfi_ptr->key != gi_ptr) + gfi_ptr = 0; + + crc32 = crc32_unsigned (crc32, gfi_ptr ? gfi_ptr->cfg_checksum : 0); + crc32 = crc32_unsigned (crc32, + gfi_ptr ? 
gfi_ptr->lineno_checksum : 0); + if (!gfi_ptr) + continue; + + ci_ptr = gfi_ptr->ctrs; + for (t_ix = 0; t_ix != GCOV_COUNTERS_SUMMABLE; t_ix++) + { + if (!gi_ptr->merge[t_ix]) + continue; + + cs_ptr = &(this_prg->ctrs[t_ix]); + cs_ptr->num += ci_ptr->num; + crc32 = crc32_unsigned (crc32, ci_ptr->num); + + for (c_num = 0; c_num < ci_ptr->num; c_num++) + { + cs_ptr->sum_all += ci_ptr->values[c_num]; + if (cs_ptr->run_max < ci_ptr->values[c_num]) + cs_ptr->run_max = ci_ptr->values[c_num]; + } + ci_ptr++; + } + } + } + gcov_compute_histogram (this_prg); + return crc32; +} + +/* A struct that bundles all the related information about the + gcda filename. */ +struct gcov_filename_aux +{ + char *gi_filename_up; + int gcov_prefix_strip; + size_t prefix_length; +}; + +/* Include system-dependent components. */ +#if !defined (__KERNEL__) +#include "libgcov-driver-system.c" +#else +#include "libgcov-driver-kernel.c" +#endif + +static int +scan_build_info (struct gcov_info *gi_ptr) +{ + gcov_unsigned_t i, length; + gcov_unsigned_t num_strings = 0; + char **build_info_strings; + + length = gcov_read_unsigned (); + build_info_strings = gcov_read_build_info (length, &num_strings); + if (!build_info_strings) + { + gcov_error ("profiling:%s:Error reading build info\n", gi_filename); + return -1; + } + if (!gi_ptr->build_info) + { + gcov_error ("profiling:%s:Mismatched build info sections (expected " + "none, found %u strings)\n", gi_filename, num_strings); + return -1; + } + + for (i = 0; i < num_strings; i++) + { + if (strcmp (build_info_strings[i], gi_ptr->build_info[i])) + { + gcov_error ("profiling:%s:Mismatched build info string " + "(expected %s, read %s)\n", + gi_filename, gi_ptr->build_info[i], + build_info_strings[i]); + return -1; + } + xfree (build_info_strings[i]); + } + xfree (build_info_strings); + return 0; +} + +#if !defined(__KERNEL__) +/* Scan through the current open gcda file corresponding to GI_PTR + to locate the end position just before function data should be rewritten, + returned in SUMMARY_END_POS_P. E.g. scan past the last summary and other + sections that won't be rewritten, like the build info. Return 0 on success, + -1 on error. */ +static int +gcov_scan_to_function_data (struct gcov_info *gi_ptr, + gcov_position_t *summary_end_pos_p) +{ + gcov_unsigned_t tag, version, stamp; + tag = gcov_read_unsigned (); + if (tag != GCOV_DATA_MAGIC) + { + gcov_error ("profiling:%s:Not a gcov data file\n", gi_filename); + return -1; + } + + version = gcov_read_unsigned (); + if (!gcov_version (gi_ptr, version, gi_filename)) + return -1; + + stamp = gcov_read_unsigned (); + if (stamp != gi_ptr->stamp) + /* Read from a different compilation. Overwrite the file. */ + return -1; + + /* Look for program summary. */ + while (1) + { + struct gcov_summary tmp; + + *summary_end_pos_p = gcov_position (); + tag = gcov_read_unsigned (); + if (tag != GCOV_TAG_PROGRAM_SUMMARY) + break; + + gcov_read_unsigned (); + gcov_read_summary (&tmp); + if (gcov_is_error ()) + return -1; + } + + /* If there is a build info section, scan past it as well. */ + if (tag == GCOV_TAG_BUILD_INFO) + { + if (scan_build_info (gi_ptr) < 0) + return -1; + + *summary_end_pos_p = gcov_position (); + tag = gcov_read_unsigned (); + } + /* The next section should be the function counters. */ + gcc_assert (tag == GCOV_TAG_FUNCTION); + + return 0; +} +#endif /* !__KERNEL__ */ + +/* This function merges counters in GI_PTR to an existing gcda file. + Return 0 on success. + Return -1 on error. In this case, the caller will goto read_fatal. 
*/ + +static int +gcov_exit_merge_gcda (struct gcov_info *gi_ptr, + struct gcov_summary *prg_p, + struct gcov_summary *this_prg, + gcov_position_t *summary_pos_p, + gcov_position_t *eof_pos_p, + gcov_unsigned_t crc32) +{ + gcov_unsigned_t tag, length; + unsigned t_ix; + int f_ix; + int error = 0; + struct gcov_fn_buffer **fn_tail = &fn_buffer; + struct gcov_summary_buffer **sum_tail = &sum_buffer; + int *zero_fixup_flags = NULL; + + length = gcov_read_unsigned (); + if (!gcov_version (gi_ptr, length, gi_filename)) + return -1; + + length = gcov_read_unsigned (); + if (length != gi_ptr->stamp) + /* Read from a different compilation. Overwrite the file. */ + return 0; + + /* Look for program summary. */ + for (f_ix = 0;;) + { + struct gcov_summary tmp; + + *eof_pos_p = gcov_position (); + tag = gcov_read_unsigned (); + if (tag != GCOV_TAG_PROGRAM_SUMMARY) + break; + + f_ix--; + length = gcov_read_unsigned (); + gcov_read_summary (&tmp); + if ((error = gcov_is_error ())) + goto read_error; + if (*summary_pos_p) + { + /* Save all summaries after the one that will be + merged into below. These will need to be rewritten + as histogram merging may change the number of non-zero + histogram entries that will be emitted, and thus the + size of the merged summary. */ + (*sum_tail) = (struct gcov_summary_buffer *) + xmalloc (sizeof(struct gcov_summary_buffer)); + (*sum_tail)->summary = tmp; + (*sum_tail)->next = 0; + sum_tail = &((*sum_tail)->next); + goto next_summary; + } + if (tmp.checksum != crc32) + goto next_summary; + + for (t_ix = 0; t_ix != GCOV_COUNTERS_SUMMABLE; t_ix++) + if (tmp.ctrs[t_ix].num != this_prg->ctrs[t_ix].num) + goto next_summary; + *prg_p = tmp; + *summary_pos_p = *eof_pos_p; + + next_summary:; + } + + if (tag == GCOV_TAG_BUILD_INFO) + { + if (scan_build_info (gi_ptr) < 0) + return -1; + + /* Since the stamps matched if we got here, this should be from + the same compilation and the build info strings should match. */ + tag = gcov_read_unsigned (); + } + + if (tag == GCOV_TAG_COMDAT_ZERO_FIXUP) + { + gcov_unsigned_t num_fns = 0; + length = gcov_read_unsigned (); + zero_fixup_flags = gcov_read_comdat_zero_fixup (length, &num_fns); + if (!zero_fixup_flags) + { + gcov_error ("profiling:%s:Error reading zero fixup flags\n", + gi_filename); + return -1; + } + + tag = gcov_read_unsigned (); + } + + /* Merge execution counts for each function. */ + for (f_ix = 0; (unsigned)f_ix != gi_ptr->n_functions; + f_ix++, tag = gcov_read_unsigned ()) + { + const struct gcov_ctr_info *ci_ptr; + const struct gcov_fn_info *gfi_ptr = gi_ptr->functions[f_ix]; + + if (tag != GCOV_TAG_FUNCTION) + goto read_mismatch; + + length = gcov_read_unsigned (); + if (!length) + /* This function did not appear in the other program. + We have nothing to merge. */ + continue; + + if (length != GCOV_TAG_FUNCTION_LENGTH) + goto read_mismatch; + + if (!gfi_ptr || gfi_ptr->key != gi_ptr) + { + /* This function appears in the other program. We + need to buffer the information in order to write + it back out -- we'll be inserting data before + this point, so cannot simply keep the data in the + file. 
*/ + fn_tail = buffer_fn_data (gi_filename, + gi_ptr, fn_tail, f_ix); + if (!fn_tail) + goto read_mismatch; + continue; + } + + if (zero_fixup_flags) + set_gcov_fn_fixed_up (zero_fixup_flags[f_ix]); + + length = gcov_read_unsigned (); + if (length != gfi_ptr->ident) + goto read_mismatch; + + length = gcov_read_unsigned (); + if (length != gfi_ptr->lineno_checksum) + goto read_mismatch; + + length = gcov_read_unsigned (); + if (length != gfi_ptr->cfg_checksum) + goto read_mismatch; + + ci_ptr = gfi_ptr->ctrs; + for (t_ix = 0; t_ix < GCOV_COUNTERS; t_ix++) + { + gcov_merge_fn merge = gi_ptr->merge[t_ix]; + + if (!merge) + continue; + + tag = gcov_read_unsigned (); + length = gcov_read_unsigned (); + if (tag != GCOV_TAG_FOR_COUNTER (t_ix) + || length != GCOV_TAG_COUNTER_LENGTH (ci_ptr->num)) + goto read_mismatch; + (*merge) (ci_ptr->values, ci_ptr->num); + ci_ptr++; + } + if ((error = gcov_is_error ())) + goto read_error; + } + xfree (zero_fixup_flags); + + if (tag && tag != GCOV_TAG_MODULE_INFO) + { + read_mismatch:; + gcov_error ("profiling:%s:Merge mismatch for %s %u\n", + gi_filename, f_ix >= 0 ? "function" : "summary", + f_ix < 0 ? -1 - f_ix : f_ix); + return -1; + } + return 0; + +read_error: + gcov_error ("profiling:%s:%s merging\n", gi_filename, + error < 0 ? "Overflow": "Error"); + return -1; +} + +#if !defined(__KERNEL__) +/* Write NUM_FNS ZERO_COUNTS fixup flags to a gcda file starting from its + current location. */ + +static void +gcov_write_comdat_zero_fixup (char *zero_counts, unsigned num_fns) +{ + unsigned f_ix; + gcov_unsigned_t len = GCOV_TAG_COMDAT_ZERO_FIXUP_LENGTH (num_fns); + gcov_unsigned_t bitvector = 0, b_ix = 0; + gcov_write_tag_length (GCOV_TAG_COMDAT_ZERO_FIXUP, len); + + gcov_write_unsigned (num_fns); + for (f_ix = 0; f_ix != num_fns; f_ix++) + { + if (zero_counts[f_ix]) + bitvector |= 1 << b_ix; + if (++b_ix == 32) + { + gcov_write_unsigned (bitvector); + b_ix = 0; + bitvector = 0; + } + } + if (b_ix > 0) + gcov_write_unsigned (bitvector); +} +#endif /* __KERNEL__ */ + +/* Write build_info strings from GI_PTR to a gcda file starting from its current + location. */ + +static void +gcov_write_build_info (struct gcov_info *gi_ptr) +{ + gcov_unsigned_t num = 0; + gcov_unsigned_t len = 1; + + if (!gi_ptr->build_info) + return; + + /* Count the number of strings, which is terminated with an empty string. */ + while (gi_ptr->build_info[num][0]) + num++; + + len += gcov_compute_string_array_len (gi_ptr->build_info, num); + gcov_write_tag_length (GCOV_TAG_BUILD_INFO, len); + gcov_write_unsigned (num); + gcov_write_string_array (gi_ptr->build_info, num); +} + +/* Write counters in GI_PTR to a gcda file starting from its current + location. */ + +static void +gcov_write_func_counters (struct gcov_info *gi_ptr) +{ + unsigned f_ix; + + /* Write execution counts for each function. */ + for (f_ix = 0; f_ix != gi_ptr->n_functions; f_ix++) + { + unsigned buffered = 0; + const struct gcov_fn_info *gfi_ptr; + const struct gcov_ctr_info *ci_ptr; + gcov_unsigned_t length; + unsigned t_ix; + + if (fn_buffer && fn_buffer->fn_ix == f_ix) + { + /* Buffered data from another program. 
*/ + buffered = 1; + gfi_ptr = &fn_buffer->info; + length = GCOV_TAG_FUNCTION_LENGTH; + } + else + { + gfi_ptr = gi_ptr->functions[f_ix]; + if (gfi_ptr && gfi_ptr->key == gi_ptr) + length = GCOV_TAG_FUNCTION_LENGTH; + else + length = 0; + } + + gcov_write_tag_length (GCOV_TAG_FUNCTION, length); + if (!length) + continue; + + gcov_write_unsigned (gfi_ptr->ident); + gcov_write_unsigned (gfi_ptr->lineno_checksum); + gcov_write_unsigned (gfi_ptr->cfg_checksum); + + ci_ptr = gfi_ptr->ctrs; + for (t_ix = 0; t_ix < GCOV_COUNTERS; t_ix++) + { + gcov_unsigned_t n_counts; + gcov_type *c_ptr; + + if (!gi_ptr->merge[t_ix]) + continue; + + n_counts = ci_ptr->num; + gcov_write_tag_length (GCOV_TAG_FOR_COUNTER (t_ix), + GCOV_TAG_COUNTER_LENGTH (n_counts)); + c_ptr = ci_ptr->values; + while (n_counts--) + gcov_write_counter (*c_ptr++); + ci_ptr++; + } +#if !defined(__KERNEL__) + if (buffered) + fn_buffer = free_fn_data (gi_ptr, fn_buffer, GCOV_COUNTERS); +#endif /* !__KERNEL__ */ + } + + gi_ptr->eof_pos = gcov_position (); + gcov_write_unsigned (0); +} + +/* Write counters in GI_PTR and the summary in PRG to a gcda file. In + the case of appending to an existing file, SUMMARY_POS will be non-zero. + We will write the file starting from SUMMARY_POS. */ + +static void +gcov_exit_write_gcda (struct gcov_info *gi_ptr, + const struct gcov_summary *prg_p, + const gcov_position_t eof_pos, + const gcov_position_t summary_pos) +{ + struct gcov_summary_buffer *next_sum_buffer; + + /* Write out the data. */ + if (!eof_pos) + { + gcov_write_tag_length (GCOV_DATA_MAGIC, GCOV_VERSION); + gcov_write_unsigned (gi_ptr->stamp); + } + + if (summary_pos) + gcov_seek (summary_pos); + gcc_assert (!summary_pos || summary_pos == gcov_position ()); + + /* Generate whole program statistics. */ + gcov_write_summary (GCOV_TAG_PROGRAM_SUMMARY, prg_p); + + /* Rewrite all the summaries that were after the summary we merged + into. This is necessary as the merged summary may have a different + size due to the number of non-zero histogram entries changing after + merging. */ + + while (sum_buffer) + { + gcov_write_summary (GCOV_TAG_PROGRAM_SUMMARY, &sum_buffer->summary); + next_sum_buffer = sum_buffer->next; + xfree (sum_buffer); + sum_buffer = next_sum_buffer; + } + + gcov_write_build_info (gi_ptr); + + /* Write the counters. */ + gcov_write_func_counters (gi_ptr); +} + +/* Helper function for merging summary. + Return -1 on error. Return 0 on success. */ + +static int +gcov_exit_merge_summary (const struct gcov_info *gi_ptr, struct gcov_summary *prg, + struct gcov_summary *this_prg, gcov_unsigned_t crc32, + struct gcov_summary *all_prg __attribute__ ((unused))) +{ + struct gcov_ctr_summary *cs_prg, *cs_tprg; + unsigned t_ix; +#if !GCOV_LOCKED + /* summary for all instances of program. */ + struct gcov_ctr_summary *cs_all; +#endif + + /* Merge the summaries. 
*/ + for (t_ix = 0; t_ix < GCOV_COUNTERS_SUMMABLE; t_ix++) + { + cs_prg = &(prg->ctrs[t_ix]); + cs_tprg = &(this_prg->ctrs[t_ix]); + + if (gi_ptr->merge[t_ix]) + { + int first = !cs_prg->runs; + + if (!run_accounted) + cs_prg->runs++; + if (first) + cs_prg->num = cs_tprg->num; + cs_prg->sum_all += cs_tprg->sum_all; + if (cs_prg->run_max < cs_tprg->run_max) + cs_prg->run_max = cs_tprg->run_max; + cs_prg->sum_max += cs_tprg->run_max; + if (first) + memcpy (cs_prg->histogram, cs_tprg->histogram, + sizeof (gcov_bucket_type) * GCOV_HISTOGRAM_SIZE); + else + gcov_histogram_merge (cs_prg->histogram, cs_tprg->histogram); + } + else if (cs_prg->runs) + { + gcov_error ("profiling:%s:Merge mismatch for summary.\n", + gi_filename); + return -1; + } +#if !GCOV_LOCKED + cs_all = &all_prg->ctrs[t_ix]; + if (!cs_all->runs && cs_prg->runs) + { + cs_all->num = cs_prg->num; + cs_all->runs = cs_prg->runs; + cs_all->sum_all = cs_prg->sum_all; + cs_all->run_max = cs_prg->run_max; + cs_all->sum_max = cs_prg->sum_max; + } + else if (!all_prg->checksum + /* Don't compare the histograms, which may have slight + variations depending on the order they were updated + due to the truncating integer divides used in the + merge. */ + && (cs_all->num != cs_prg->num + || cs_all->runs != cs_prg->runs + || cs_all->sum_all != cs_prg->sum_all + || cs_all->run_max != cs_prg->run_max + || cs_all->sum_max != cs_prg->sum_max)) + { + gcov_error ("profiling:%s:Data file mismatch - some " + "data files may have been concurrently " + "updated without locking support\n", gi_filename); + all_prg->checksum = ~0u; + } +#endif + } + + prg->checksum = crc32; + + return 0; +} + +__attribute__((weak)) gcov_unsigned_t __gcov_lipo_sampling_period; + +/* Sort N entries in VALUE_ARRAY in descending order. + Each entry in VALUE_ARRAY has two values. The sorting + is based on the second value. */ + +GCOV_LINKAGE void +gcov_sort_n_vals (gcov_type *value_array, int n) +{ + int j, k; + for (j = 2; j < n; j += 2) + { + gcov_type cur_ent[2]; + cur_ent[0] = value_array[j]; + cur_ent[1] = value_array[j + 1]; + k = j - 2; + while (k >= 0 && value_array[k + 1] < cur_ent[1]) + { + value_array[k + 2] = value_array[k]; + value_array[k + 3] = value_array[k+1]; + k -= 2; + } + value_array[k + 2] = cur_ent[0]; + value_array[k + 3] = cur_ent[1]; + } +} + +/* Sort the profile counters for all indirect call sites. Counters + for each call site are allocated in array COUNTERS. */ + +static void +gcov_sort_icall_topn_counter (const struct gcov_ctr_info *counters) +{ + int i; + gcov_type *values; + int n = counters->num; + gcc_assert (!(n % GCOV_ICALL_TOPN_NCOUNTS)); + + values = counters->values; + + for (i = 0; i < n; i += GCOV_ICALL_TOPN_NCOUNTS) + { + gcov_type *value_array = &values[i + 1]; + gcov_sort_n_vals (value_array, GCOV_ICALL_TOPN_NCOUNTS - 1); + } +} + +static void +gcov_sort_topn_counter_arrays (const struct gcov_info *gi_ptr) +{ + unsigned int i; + int f_ix; + const struct gcov_fn_info *gfi_ptr; + const struct gcov_ctr_info *ci_ptr; + + for (f_ix = 0; (unsigned)f_ix != gi_ptr->n_functions; f_ix++) + { + gfi_ptr = gi_ptr->functions[f_ix]; + ci_ptr = gfi_ptr->ctrs; + for (i = 0; i < GCOV_COUNTERS; i++) + { + if (!gcov_counter_active (gi_ptr, i)) + continue; + if (i == GCOV_COUNTER_ICALL_TOPNV) + { + gcov_sort_icall_topn_counter (ci_ptr); + break; + } + ci_ptr++; + } + } +} + +/* Scaling LIPO sampled profile counters. 
*/ +static void +gcov_scaling_lipo_counters (const struct gcov_info *gi_ptr) +{ + unsigned int i, j, k; + int f_ix; + const struct gcov_fn_info *gfi_ptr; + const struct gcov_ctr_info *ci_ptr; + + if (__gcov_lipo_sampling_period <= 1) + return; + + for (f_ix = 0; (unsigned)f_ix != gi_ptr->n_functions; f_ix++) + { + gfi_ptr = gi_ptr->functions[f_ix]; + ci_ptr = gfi_ptr->ctrs; + for (i = 0; i < GCOV_COUNTERS; i++) + { + if (!gcov_counter_active (gi_ptr, i)) + continue; + if (i == GCOV_COUNTER_ICALL_TOPNV) + { + for (j = 0; j < ci_ptr->num; j += GCOV_ICALL_TOPN_NCOUNTS) + for (k = 2; k < GCOV_ICALL_TOPN_NCOUNTS; k += 2) + ci_ptr->values[j+k] *= __gcov_lipo_sampling_period; + } + if (i == GCOV_COUNTER_DIRECT_CALL) + { + for (j = 0; j < ci_ptr->num; j += 2) + ci_ptr->values[j+1] *= __gcov_lipo_sampling_period; + } + ci_ptr++; + } + } +} + +/* Open a gcda file specified by GI_FILENAME. + Return -1 on error. Return 0 on success. */ + +static int +gcov_exit_open_gcda_file (struct gcov_info *gi_ptr, struct gcov_filename_aux *gf) +{ + int gcov_prefix_strip; + size_t prefix_length; + char *gi_filename_up; + + gcov_prefix_strip = gf->gcov_prefix_strip; + gi_filename_up = gf->gi_filename_up; + prefix_length = gf->prefix_length; + + gcov_strip_leading_dirs (prefix_length, gcov_prefix_strip, gi_ptr->filename, + gi_filename_up); + + return gcov_open_by_filename (gi_filename); +} + +/* Dump the coverage counts for one gcov_info object. We merge with existing + counts when possible, to avoid growing the .da files ad infinitum. We use + this program's checksum to make sure we only accumulate whole program + statistics to the correct summary. An object file might be embedded + in two separate programs, and we must keep the two program + summaries separate. */ + +static void +gcov_exit_dump_gcov (struct gcov_info *gi_ptr, struct gcov_filename_aux *gf, + gcov_unsigned_t crc32, struct gcov_summary *all_prg, + struct gcov_summary *this_prg) +{ +/* We have to make the decl static as the kernel has a limited stack size. + If we put prg on the stack, we would run into a nasty stack overflow. */ +#if defined(__KERNEL__) + static +#endif + struct gcov_summary prg; /* summary for this object over the whole program. */ + int error; + gcov_unsigned_t tag = 0; + gcov_position_t summary_pos = 0; + gcov_position_t eof_pos = 0; + + fn_buffer = 0; + sum_buffer = 0; + + gcov_sort_topn_counter_arrays (gi_ptr); + gcov_scaling_lipo_counters (gi_ptr); + + error = gcov_exit_open_gcda_file (gi_ptr, gf); + if (error == -1) + return; + +#if !defined(__KERNEL__) + tag = gcov_read_unsigned (); +#endif + if (tag) + { + /* Merge data from file. */ + if (tag != GCOV_DATA_MAGIC) + { + gcov_error ("profiling:%s:Not a gcov data file\n", gi_filename); + goto read_fatal; + } + error = gcov_exit_merge_gcda (gi_ptr, &prg, this_prg, &summary_pos, &eof_pos, + crc32); + if (error == -1) + goto read_fatal; + } + + gcov_rewrite (); + + if (!summary_pos) + { + memset (&prg, 0, sizeof (prg)); + summary_pos = eof_pos; + } + + error = gcov_exit_merge_summary (gi_ptr, &prg, this_prg, crc32, all_prg); + if (error == -1) + goto read_fatal; + + gcov_exit_write_gcda (gi_ptr, &prg, eof_pos, summary_pos); + /* fall through */ + +read_fatal:; +#if !defined(__KERNEL__) + while (fn_buffer) + fn_buffer = free_fn_data (gi_ptr, fn_buffer, GCOV_COUNTERS); +#else + + /* In LIPO mode, dump the primary module info. */ + if (gi_ptr->mod_info && gi_ptr->mod_info->is_primary) + { + /* Overwrite the zero word at the end of the file. 
*/ + gcov_seek (gi_ptr->eof_pos); + gcov_write_module_info (gi_ptr, 1); + /* Write the end marker. */ + gcov_write_unsigned (0); + } +#endif + + if ((error = gcov_close ())) + gcov_error (error < 0 ? + "profiling:%s:Overflow writing\n" : + "profiling:%s:Error writing\n", + gi_filename); +} + +#if !defined (__KERNEL__) +/* Write imported files (auxiliary modules) for primary module GI_PTR + into file GI_FILENAME. */ + +static void +gcov_write_import_file (char *gi_filename, struct gcov_info *gi_ptr) +{ + char *gi_imports_filename; + const char *gcov_suffix; + FILE *imports_file; + size_t prefix_length, suffix_length; + + gcov_suffix = getenv ("GCOV_IMPORTS_SUFFIX"); + if (!gcov_suffix || !strlen (gcov_suffix)) + gcov_suffix = ".imports"; + suffix_length = strlen (gcov_suffix); + prefix_length = strlen (gi_filename); + gi_imports_filename = (char *) alloca (prefix_length + suffix_length + 1); + memset (gi_imports_filename, 0, prefix_length + suffix_length + 1); + memcpy (gi_imports_filename, gi_filename, prefix_length); + memcpy (gi_imports_filename + prefix_length, gcov_suffix, suffix_length); + imports_file = fopen (gi_imports_filename, "w"); + if (imports_file) + { + const struct dyn_imp_mod **imp_mods; + unsigned i, imp_len; + imp_mods = gcov_get_sorted_import_module_array (gi_ptr, &imp_len); + if (imp_mods) + { + for (i = 0; i < imp_len; i++) + { + fprintf (imports_file, "%s\n", + imp_mods[i]->imp_mod->mod_info->source_filename); + fprintf (imports_file, "%s%s\n", + imp_mods[i]->imp_mod->mod_info->da_filename, GCOV_DATA_SUFFIX); + } + xfree (imp_mods); + } + fclose (imports_file); + } +} + +static void +gcov_dump_module_info (struct gcov_filename_aux *gf) +{ + struct gcov_info *gi_ptr; + + unsigned max_module_id = 0; + for (gi_ptr = __gcov_list; gi_ptr; gi_ptr = gi_ptr->next) + { + unsigned mod_id = gi_ptr->mod_info->ident; + if (max_module_id < mod_id) + max_module_id = mod_id; + } + char **zero_counts = (char **) xcalloc (max_module_id, sizeof (char *)); + for (gi_ptr = __gcov_list; gi_ptr; gi_ptr = gi_ptr->next) + { + unsigned mod_id = gi_ptr->mod_info->ident; + zero_counts[mod_id-1] = (char *) xcalloc (gi_ptr->n_functions, + sizeof (char)); + } + + /* Compute the module groups and record whether there were any + counter fixups applied that require rewriting the counters. */ + int changed = __gcov_compute_module_groups (zero_counts); + + /* Now write out module group info. */ + for (gi_ptr = __gcov_list; gi_ptr; gi_ptr = gi_ptr->next) + { + int error; + + if (gcov_exit_open_gcda_file (gi_ptr, gf) == -1) + continue; + + if (changed) + { + /* Scan file to find the start of the function section, which is + where we will start re-writing the counters. */ + gcov_position_t summary_end_pos; + if (gcov_scan_to_function_data (gi_ptr, &summary_end_pos) == -1) + gcov_error ("profiling:%s:Error scanning summaries\n", + gi_filename); + else + { + gcov_position_t eof_pos = gi_ptr->eof_pos; + gcov_rewrite (); + gcov_seek (summary_end_pos); + + unsigned mod_id = gi_ptr->mod_info->ident; + gcov_write_comdat_zero_fixup (zero_counts[mod_id-1], + gi_ptr->n_functions); + gcov_position_t zero_fixup_eof_pos = gcov_position (); + + gcov_write_func_counters (gi_ptr); + gcc_assert (eof_pos + (zero_fixup_eof_pos - summary_end_pos) + == gi_ptr->eof_pos); + } + } + else + gcov_rewrite (); + + /* Overwrite the zero word at the end of the file. 
*/ + gcov_seek (gi_ptr->eof_pos); + + gcov_write_module_infos (gi_ptr); + /* Write the end marker. */ + gcov_write_unsigned (0); + gcov_truncate (); + + if ((error = gcov_close ())) + gcov_error (error < 0 ? "profiling:%s:Overflow writing\n" : + "profiling:%s:Error writing\n", + gi_filename); + gcov_write_import_file (gi_filename, gi_ptr); + free (zero_counts[gi_ptr->mod_info->ident-1]); + } + + free (zero_counts); + + __gcov_finalize_dyn_callgraph (); +} + +/* Dump all the coverage counts for the program. It first computes the + program summary and then traverses the gcov_list, dumping the gcov_info + objects one by one. */ + +void +gcov_exit (void) +{ + struct gcov_info *gi_ptr; + struct gcov_filename_aux gf; + gcov_unsigned_t crc32; + int dump_module_info = 0; + struct gcov_summary all_prg; + struct gcov_summary this_prg; + + /* Prevent the counters from being dumped a second time on exit when the + application already wrote out the profile using __gcov_dump(). */ + if (gcov_dump_complete) + return; + + crc32 = gcov_exit_compute_summary (&this_prg); + + allocate_filename_struct (&gf); +#if !GCOV_LOCKED + memset (&all_prg, 0, sizeof (all_prg)); +#endif + + /* Now merge each file. */ + for (gi_ptr = __gcov_list; gi_ptr; gi_ptr = gi_ptr->next) + { + gcov_exit_dump_gcov (gi_ptr, &gf, crc32, &all_prg, &this_prg); + + /* The IS_PRIMARY field is overloaded to indicate if this module + is FDO/LIPO. */ + if (gi_ptr->mod_info) + dump_module_info |= gi_ptr->mod_info->is_primary; + } + run_accounted = 1; + + if (dump_module_info) + gcov_dump_module_info (&gf); + + if (gi_filename) + xfree (gi_filename); +} + +/* Add a new object file onto the bb chain. Invoked automatically + when running an object file's global ctors. */ + +void +__gcov_init (struct gcov_info *info) +{ +#ifndef IN_GCOV_TOOL + if (!gcov_sampling_period_initialized) + { + const char* env_value_str = getenv ("GCOV_SAMPLING_PERIOD"); + if (env_value_str) + { + int env_value_int = atoi (env_value_str); + if (env_value_int >= 1) + __gcov_sampling_period = env_value_int; + } + env_value_str = getenv ("GCOV_LIPO_SAMPLING_PERIOD"); + if (env_value_str) + { + int env_value_int = atoi (env_value_str); + if (env_value_int >= 0) + __gcov_lipo_sampling_period = env_value_int; + } + gcov_sampling_period_initialized = 1; + } +#endif + + if (!info->version || !info->n_functions) + return; + if (gcov_version (info, info->version, 0)) + { + size_t filename_length = strlen (info->filename); + + /* Refresh the longest file name information. */ + if (filename_length > gcov_max_filename) + gcov_max_filename = filename_length; + + /* Assign the module ID (starting at 1). */ + info->mod_info->ident = (++gcov_cur_module_id); + gcc_assert (EXTRACT_MODULE_ID_FROM_GLOBAL_ID (GEN_FUNC_GLOBAL_ID ( + info->mod_info->ident, 0)) + == info->mod_info->ident); + + if (!__gcov_list) + atexit (gcov_exit); + + info->next = __gcov_list; + __gcov_list = info; + } + info->version = 0; +} + +#else /* __KERNEL__ */ + +static struct gcov_filename_aux gf; +static gcov_unsigned_t crc32; +static struct gcov_summary all_prg; +static struct gcov_summary this_prg; +void +gcov_kernel_dump_gcov_init (void) +{ + crc32 = gcov_exit_compute_summary (&this_prg); + allocate_filename_struct (&gf); + memset (&all_prg, 0, sizeof (all_prg)); +} + +void +gcov_kernel_dump_one_gcov (struct gcov_info *info) +{ + gcov_exit_dump_gcov (info, &gf, crc32, &all_prg, &this_prg); +} + +#endif /* __KERNEL__ */ + +/* Reset all counters to zero. 
*/ + +void +gcov_clear (void) +{ + const struct gcov_info *gi_ptr; + + for (gi_ptr = __gcov_list; gi_ptr; gi_ptr = gi_ptr->next) + { + unsigned f_ix; + + for (f_ix = 0; f_ix < gi_ptr->n_functions; f_ix++) + { + unsigned t_ix; + const struct gcov_fn_info *gfi_ptr = gi_ptr->functions[f_ix]; + const struct gcov_ctr_info *ci_ptr; + + if (!gfi_ptr || gfi_ptr->key != gi_ptr) + continue; + ci_ptr = gfi_ptr->ctrs; + for (t_ix = 0; t_ix != GCOV_COUNTERS; t_ix++) + { + if (!gi_ptr->merge[t_ix]) + continue; + + memset (ci_ptr->values, 0, sizeof (gcov_type) * ci_ptr->num); + ci_ptr++; + } + } + } +} + +/* Write out MOD_INFO into the gcda file. IS_PRIMARY is a flag + indicating if the module is the primary module in the group. */ + +void +gcov_write_module_info (const struct gcov_info *mod_info, + unsigned is_primary) +{ + gcov_unsigned_t len = 0, filename_len = 0, src_filename_len = 0, i; + gcov_unsigned_t num_strings; + gcov_unsigned_t *aligned_fname; + struct gcov_module_info *module_info = mod_info->mod_info; + filename_len = (strlen (module_info->da_filename) + + sizeof (gcov_unsigned_t)) / sizeof (gcov_unsigned_t); + src_filename_len = (strlen (module_info->source_filename) + + sizeof (gcov_unsigned_t)) / sizeof (gcov_unsigned_t); + len = filename_len + src_filename_len; + len += 2; /* each name string is led by a length. */ + + num_strings = module_info->num_quote_paths + module_info->num_bracket_paths + + module_info->num_system_paths + + module_info->num_cpp_defines + module_info->num_cpp_includes + + module_info->num_cl_args; + len += gcov_compute_string_array_len (module_info->string_array, + num_strings); + + len += 11; /* 11 more fields */ + + gcov_write_tag_length (GCOV_TAG_MODULE_INFO, len); + gcov_write_unsigned (module_info->ident); + gcov_write_unsigned (is_primary); + gcov_write_unsigned (module_info->flags); + gcov_write_unsigned (module_info->lang); + gcov_write_unsigned (module_info->ggc_memory); + gcov_write_unsigned (module_info->num_quote_paths); + gcov_write_unsigned (module_info->num_bracket_paths); + gcov_write_unsigned (module_info->num_system_paths); + gcov_write_unsigned (module_info->num_cpp_defines); + gcov_write_unsigned (module_info->num_cpp_includes); + gcov_write_unsigned (module_info->num_cl_args); + + /* Now write the filenames */ + aligned_fname = (gcov_unsigned_t *) alloca ((filename_len + src_filename_len + 2) * + sizeof (gcov_unsigned_t)); + memset (aligned_fname, 0, + (filename_len + src_filename_len + 2) * sizeof (gcov_unsigned_t)); + aligned_fname[0] = filename_len; + strcpy ((char*) (aligned_fname + 1), module_info->da_filename); + aligned_fname[filename_len + 1] = src_filename_len; + strcpy ((char*) (aligned_fname + filename_len + 2), module_info->source_filename); + + for (i = 0; i < (filename_len + src_filename_len + 2); i++) + gcov_write_unsigned (aligned_fname[i]); + + /* Now write the string array. */ + gcov_write_string_array (module_info->string_array, num_strings); +} + +#endif /* L_gcov */ +#endif /* inhibit_libc */
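The program checksum that gcov_exit_compute_summary builds above is a plain bit-serial CRC-32 (polynomial 0x04c11db7, MSB first, no reflection or final xor) folded over each object's stamp, function count, checksums, and counter counts. As a self-contained illustration of just that bit loop, the following standalone sketch reproduces the crc32_unsigned update outside libgcov; the stamp and function count fed to it are invented example inputs, not values from a real gcda file.

#include <stdio.h>

/* Standalone copy of the shift-register update used by crc32_unsigned
   above: fold VALUE into CRC32 one bit at a time.  */
static unsigned int
crc32_step (unsigned int crc32, unsigned int value)
{
  unsigned int ix;

  for (ix = 32; ix--; value <<= 1)
    {
      unsigned int feedback;

      feedback = (value ^ crc32) & 0x80000000 ? 0x04c11db7 : 0;
      crc32 <<= 1;
      crc32 ^= feedback;
    }
  return crc32;
}

int
main (void)
{
  /* Made-up example inputs; in gcov_exit_compute_summary the values
     come from each gcov_info on the __gcov_list chain.  */
  unsigned int crc32 = 0;
  crc32 = crc32_step (crc32, 0x12345678);  /* gi_ptr->stamp */
  crc32 = crc32_step (crc32, 42);          /* gi_ptr->n_functions */
  printf ("program checksum: %08x\n", crc32);
  return 0;
}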
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-kernel.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-kernel.h new file mode 100644 index 0000000..b44af53 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-kernel.h
@@ -0,0 +1,121 @@ +/* Header file for libgcov-*.c. + Copyright (C) 1996-2014 Free Software Foundation, Inc. + + This file is part of GCC. + + GCC is free software; you can redistribute it and/or modify it under + the terms of the GNU General Public License as published by the Free + Software Foundation; either version 3, or (at your option) any later + version. + + GCC is distributed in the hope that it will be useful, but WITHOUT ANY + WARRANTY; without even the implied warranty of MERCHANTABILITY or + FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License + for more details. + + Under Section 7 of GPL version 3, you are granted additional + permissions described in the GCC Runtime Library Exception, version + 3.1, as published by the Free Software Foundation. + + You should have received a copy of the GNU General Public License and + a copy of the GCC Runtime Library Exception along with this program; + see the files COPYING3 and COPYING.RUNTIME respectively. If not, see + <http://www.gnu.org/licenses/>. */ + +#ifndef GCC_LIBGCOV_KERNEL_H +#define GCC_LIBGCOV_KERNEL_H + +/* work around the poisoned malloc/calloc in system.h. */ +#ifndef xmalloc +#define xmalloc vmalloc +#endif +#ifndef xcalloc +#define xcalloc vcalloc +#endif +#ifndef xrealloc +#define xrealloc vrealloc +#endif +#ifndef xfree +#define xfree vfree +#endif +#ifndef alloca +#define alloca __builtin_alloca +#endif + +#ifndef SEEK_SET +#define SEEK_SET 0 +#endif + + /* Define macros to be used by kernel compilation. */ +# define L_gcov +# define L_gcov_interval_profiler +# define L_gcov_pow2_profiler +# define L_gcov_one_value_profiler +# define L_gcov_indirect_call_profiler_v2 +# define L_gcov_direct_call_profiler +# define L_gcov_indirect_call_profiler +# define L_gcov_indirect_call_topn_profiler +# define L_gcov_time_profiler +# define L_gcov_average_profiler +# define L_gcov_ior_profiler +# define L_gcov_merge_add +# define L_gcov_merge_single +# define L_gcov_merge_delta +# define L_gcov_merge_ior +# define L_gcov_merge_time_profile +# define L_gcov_merge_icall_topn +# define L_gcov_merge_dc + +# define IN_LIBGCOV 1 +# define IN_GCOV 0 +#define THREAD_PREFIX +#define GCOV_LINKAGE /* nothing */ +#define BITS_PER_UNIT 8 +#define LONG_LONG_TYPE_SIZE 64 +#define MEMMODEL_RELAXED 0 + +#define ENABLE_ASSERT_CHECKING 1 + +/* gcc_assert() prints out a warning if the check fails. It + will not abort. */ +#if ENABLE_ASSERT_CHECKING +# define gcc_assert(EXPR) \ + ((void)(!(EXPR) ? printk (KERN_WARNING \ + "GCOV assertion fails: func=%s line=%d\n", \ + __FUNCTION__, __LINE__), 0 : 0)) +#else +# define gcc_assert(EXPR) ((void)(0 && (EXPR))) +#endif + +/* In Linux kernel mode, a virtual file is used for file operations. */ +struct gcov_info; +typedef struct { + long size; /* size of buf */ + long count; /* elements written into buf */ + struct gcov_info *info; + char *buf; +} gcov_kernel_vfile; + +#define _GCOV_FILE gcov_kernel_vfile + +/* Wrappers to the file operations. */ +#define _GCOV_fclose kernel_file_fclose +#define _GCOV_ftell kernel_file_ftell +#define _GCOV_fseek kernel_file_fseek +#define _GCOV_ftruncate kernel_file_ftruncate +#define _GCOV_fread kernel_file_fread +#define _GCOV_fwrite kernel_file_fwrite +#define _GCOV_fileno kernel_file_fileno + +/* Declarations for virtual file operations. 
*/ +extern int kernel_file_fclose (gcov_kernel_vfile *); +extern long kernel_file_ftell (gcov_kernel_vfile *); +extern int kernel_file_fseek (gcov_kernel_vfile *, long, int); +extern int kernel_file_ftruncate (gcov_kernel_vfile *, off_t); +extern int kernel_file_fread (void *, size_t, size_t, + gcov_kernel_vfile *); +extern int kernel_file_fwrite (const void *, size_t, size_t, + gcov_kernel_vfile *); +extern int kernel_file_fileno (gcov_kernel_vfile *); + +#endif /* GCC_LIBGCOV_KERNEL_H */
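The kernel_file_* operations declared above are implemented by the kernel-side gcov patch, not in this header. Purely as a rough sketch of the gcov_kernel_vfile contract (the function name and buffer policy below are hypothetical, not the actual kernel implementation, and the gcov_kernel_vfile typedef above is assumed to be in scope), an fwrite-style wrapper might append into the in-memory buffer like this:

#include <stddef.h>

/* Hypothetical sketch only: append SIZE * NITEMS bytes at the current
   write offset (file->count) and mimic fwrite's item-count return.
   The real kernel_file_fwrite also handles reallocation and error
   reporting.  */
static int
sketch_kernel_file_fwrite (const void *ptr, size_t size, size_t nitems,
                           gcov_kernel_vfile *file)
{
  const char *src = (const char *) ptr;
  long bytes = (long) (size * nitems);
  long i;

  if (!file || !file->buf || file->count + bytes > file->size)
    return 0;                 /* no file or no room: write nothing */

  for (i = 0; i < bytes; i++) /* append after what was already written */
    file->buf[file->count + i] = src[i];
  file->count += bytes;
  return (int) nitems;
}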
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-merge.c b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-merge.c new file mode 100644 index 0000000..997dab3 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-merge.c
@@ -0,0 +1,299 @@ +/* Routines required for instrumenting a program. */ +/* Compile this one with gcc. */ +/* Copyright (C) 1989-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify it under +the terms of the GNU General Public License as published by the Free +Software Foundation; either version 3, or (at your option) any later +version. + +GCC is distributed in the hope that it will be useful, but WITHOUT ANY +WARRANTY; without even the implied warranty of MERCHANTABILITY or +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License +for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +#include "libgcov.h" + +#if defined(inhibit_libc) +/* If libc and its header files are not available, provide dummy functions. */ + +#ifdef L_gcov_merge_add +void __gcov_merge_add (gcov_type *counters __attribute__ ((unused)), + unsigned n_counters __attribute__ ((unused))) {} +#endif + +#ifdef L_gcov_merge_single +void __gcov_merge_single (gcov_type *counters __attribute__ ((unused)), + unsigned n_counters __attribute__ ((unused))) {} +#endif + +#ifdef L_gcov_merge_delta +void __gcov_merge_delta (gcov_type *counters __attribute__ ((unused)), + unsigned n_counters __attribute__ ((unused))) {} +#endif + +#else + +#ifdef L_gcov_merge_add +/* The profile merging function that just adds the counters. It is given + an array COUNTERS of N_COUNTERS old counters and it reads the same number + of counters from the gcov file. */ +void +__gcov_merge_add (gcov_type *counters, unsigned n_counters) +{ + for (; n_counters; counters++, n_counters--) + *counters += gcov_get_counter (); +} +#endif /* L_gcov_merge_add */ + +#ifdef L_gcov_merge_ior +/* The profile merging function that bitwise-ORs the counters. It is given + an array COUNTERS of N_COUNTERS old counters and it reads the same number + of counters from the gcov file. */ +void +__gcov_merge_ior (gcov_type *counters, unsigned n_counters) +{ + for (; n_counters; counters++, n_counters--) + *counters |= gcov_get_counter_target (); +} +#endif + + +#ifdef L_gcov_merge_dc + +/* Returns 1 if the function global id GID is not valid. */ + +static int +__gcov_is_gid_insane (gcov_type gid) +{ + if (EXTRACT_MODULE_ID_FROM_GLOBAL_ID (gid) == 0 + || EXTRACT_FUNC_ID_FROM_GLOBAL_ID (gid) == 0) + return 1; + return 0; +} + +/* The profile merging function used for merging direct call counts. + This function is given array COUNTERS of N_COUNTERS old counters and it + reads the same number of counters from the gcov file. */ + +void +__gcov_merge_dc (gcov_type *counters, unsigned n_counters) +{ + unsigned i; + + gcc_assert (!(n_counters % 2)); + for (i = 0; i < n_counters; i += 2) + { + gcov_type global_id = gcov_get_counter_target (); + gcov_type call_count = gcov_get_counter (); + + /* Note that the global id counter may never have been set if no calls were + made from this call-site. */ + if (counters[i] && global_id) + { + /* TODO: a race condition requires us to do the following correction. 
*/ + if (__gcov_is_gid_insane (counters[i])) + counters[i] = global_id; + else if (__gcov_is_gid_insane (global_id)) + global_id = counters[i]; + +#if !defined(__KERNEL__) + /* In the case of inconsistency, use the src's target. */ + if (counters[i] != global_id) + fprintf (stderr, "Warning: Inconsistent call targets in" + " direct-call profile.\n"); +#endif + } + else if (global_id) + counters[i] = global_id; + + counters[i + 1] += call_count; + + /* Reset. */ + if (__gcov_is_gid_insane (counters[i])) + counters[i] = counters[i + 1] = 0; + + /* Assert that the invariant (global_id == 0) <==> (call_count == 0) + holds true after merging. */ + if (counters[i] == 0) + counters[i+1] = 0; + if (counters[i + 1] == 0) + counters[i] = 0; + } +} +#endif + + +#ifdef L_gcov_merge_icall_topn +/* The profile merging function used for merging indirect call counts. + This function is given array COUNTERS of N_COUNTERS old counters and it + reads the same number of counters from the gcov file. */ + +void +__gcov_merge_icall_topn (gcov_type *counters, unsigned n_counters) +{ + unsigned i, j, k, m; + + gcc_assert (!(n_counters % GCOV_ICALL_TOPN_NCOUNTS)); + for (i = 0; i < n_counters; i += GCOV_ICALL_TOPN_NCOUNTS) + { + gcov_type *value_array = &counters[i + 1]; + unsigned tmp_size = 2 * (GCOV_ICALL_TOPN_NCOUNTS - 1); + gcov_type *tmp_array + = (gcov_type *) alloca (tmp_size * sizeof (gcov_type)); + + for (j = 0; j < tmp_size; j++) + tmp_array[j] = 0; + + for (j = 0; j < GCOV_ICALL_TOPN_NCOUNTS - 1; j += 2) + { + tmp_array[j] = value_array[j]; + tmp_array[j + 1] = value_array[j + 1]; + } + + /* Skip the number_of_eviction entry. */ + gcov_get_counter (); + for (k = 0; k < GCOV_ICALL_TOPN_NCOUNTS - 1; k += 2) + { + int found = 0; + gcov_type global_id = gcov_get_counter_target (); + gcov_type call_count = gcov_get_counter (); + for (m = 0; m < j; m += 2) + { + if (tmp_array[m] == global_id) + { + found = 1; + tmp_array[m + 1] += call_count; + break; + } + } + if (!found) + { + tmp_array[j] = global_id; + tmp_array[j + 1] = call_count; + j += 2; + } + } + /* Now sort the temp array. */ + gcov_sort_n_vals (tmp_array, j); + + /* Now copy back the top half of the temp array. */ + for (k = 0; k < GCOV_ICALL_TOPN_NCOUNTS - 1; k += 2) + { + value_array[k] = tmp_array[k]; + value_array[k + 1] = tmp_array[k + 1]; + } + } +} +#endif + + +#ifdef L_gcov_merge_time_profile +/* Time profiles are merged so that the minimum of all valid (greater than + zero) values is stored. A fork could create new counters. To keep the + profile stable, we chose to pick the smallest function visit time. */ +void +__gcov_merge_time_profile (gcov_type *counters, unsigned n_counters) +{ + unsigned int i; + gcov_type value; + + for (i = 0; i < n_counters; i++) + { + value = gcov_get_counter_target (); + + if (value && (!counters[i] || value < counters[i])) + counters[i] = value; + } +} +#endif /* L_gcov_merge_time_profile */ + +#ifdef L_gcov_merge_single +/* The profile merging function for choosing the most common value. + It is given an array COUNTERS of N_COUNTERS old counters and it + reads the same number of counters from the gcov file. 
The counters + are split into 3-tuples where the members of the tuple have + meanings: + + -- the stored candidate on the most common value of the measured entity + -- counter + -- total number of evaluations of the value */ +void +__gcov_merge_single (gcov_type *counters, unsigned n_counters) +{ + unsigned i, n_measures; + gcov_type value, counter, all; + + gcc_assert (!(n_counters % 3)); + n_measures = n_counters / 3; + for (i = 0; i < n_measures; i++, counters += 3) + { + value = gcov_get_counter_target (); + counter = gcov_get_counter (); + all = gcov_get_counter (); + + if (counters[0] == value) + counters[1] += counter; + else if (counter > counters[1]) + { + counters[0] = value; + counters[1] = counter - counters[1]; + } + else + counters[1] -= counter; + counters[2] += all; + } +} +#endif /* L_gcov_merge_single */ + +#ifdef L_gcov_merge_delta +/* The profile merging function for choosing the most common + difference between two consecutive evaluations of the value. It is + given an array COUNTERS of N_COUNTERS old counters and it reads the + same number of counters from the gcov file. The counters are split + into 4-tuples where the members of the tuple have meanings: + + -- the last value of the measured entity + -- the stored candidate on the most common difference + -- counter + -- total number of evaluations of the value */ +void +__gcov_merge_delta (gcov_type *counters, unsigned n_counters) +{ + unsigned i, n_measures; + gcov_type value, counter, all; + + gcc_assert (!(n_counters % 4)); + n_measures = n_counters / 4; + for (i = 0; i < n_measures; i++, counters += 4) + { + /* last = */ gcov_get_counter (); + value = gcov_get_counter_target (); + counter = gcov_get_counter (); + all = gcov_get_counter (); + + if (counters[1] == value) + counters[2] += counter; + else if (counter > counters[2]) + { + counters[1] = value; + counters[2] = counter - counters[2]; + } + else + counters[2] -= counter; + counters[3] += all; + } +} +#endif /* L_gcov_merge_delta */ +#endif /* inhibit_libc */
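The 3-tuple update rule of __gcov_merge_single above can be replayed in isolation. In this standalone sketch the on-disk tuple is passed in directly instead of being read through gcov_get_counter(), and all the numbers are invented for the example:

#include <stdio.h>

typedef long long gcov_type;

/* Replay of the 3-tuple rule from __gcov_merge_single above on one
   in-memory tuple DST, merging the tuple SRC read from another run.  */
static void
merge_single_tuple (gcov_type dst[3], const gcov_type src[3])
{
  gcov_type value = src[0], counter = src[1], all = src[2];

  if (dst[0] == value)
    dst[1] += counter;          /* same candidate: strengthen it */
  else if (counter > dst[1])
    {
      dst[0] = value;           /* challenger wins: swap candidates */
      dst[1] = counter - dst[1];
    }
  else
    dst[1] -= counter;          /* challenger loses: weaken candidate */
  dst[2] += all;                /* total evaluations always add up */
}

int
main (void)
{
  /* Candidate 7 seen 10 times out of 12; merge a run where candidate 9
     was seen 3 times out of 5.  7 stays, weakened by 3.  */
  gcov_type dst[3] = { 7, 10, 12 };
  const gcov_type src[3] = { 9, 3, 5 };

  merge_single_tuple (dst, src);
  printf ("candidate=%lld counter=%lld all=%lld\n",
          dst[0], dst[1], dst[2]);  /* candidate=7 counter=7 all=17 */
  return 0;
}

The run prints candidate=7 counter=7 all=17: the stored candidate survives, but its counter is weakened by the competing run, which is exactly the most-common-value bookkeeping described above.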
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-profiler.c b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-profiler.c new file mode 100644 index 0000000..7552ada --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov-profiler.c
@@ -0,0 +1,477 @@ +/* Routines required for instrumenting a program. */ +/* Compile this one with gcc. */ +/* Copyright (C) 1989-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify it under +the terms of the GNU General Public License as published by the Free +Software Foundation; either version 3, or (at your option) any later +version. + +GCC is distributed in the hope that it will be useful, but WITHOUT ANY +WARRANTY; without even the implied warranty of MERCHANTABILITY or +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License +for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +#include "libgcov.h" +#if !defined(inhibit_libc) + +#ifdef L_gcov_interval_profiler +/* If VALUE is in interval <START, START + STEPS - 1>, then increases the + corresponding counter in COUNTERS. If the VALUE is above or below + the interval, COUNTERS[STEPS] or COUNTERS[STEPS + 1] is increased + instead. */ + +void +__gcov_interval_profiler (gcov_type *counters, gcov_type value, + int start, unsigned steps) +{ + gcov_type delta = value - start; + if (delta < 0) + counters[steps + 1]++; + else if (delta >= steps) + counters[steps]++; + else + counters[delta]++; +} +#endif + +#ifdef L_gcov_pow2_profiler +/* If VALUE is a power of two, COUNTERS[1] is incremented. Otherwise + COUNTERS[0] is incremented. */ + +void +__gcov_pow2_profiler (gcov_type *counters, gcov_type value) +{ + if (value & (value - 1)) + counters[0]++; + else + counters[1]++; +} +#endif + +/* Tries to determine the most common value among its inputs. Checks if the + value stored in COUNTERS[0] matches VALUE. If this is the case, COUNTERS[1] + is incremented. If this is not the case and COUNTERS[1] is not zero, + COUNTERS[1] is decremented. Otherwise COUNTERS[1] is set to one and + VALUE is stored to COUNTERS[0]. This algorithm guarantees that if this + function is called more than 50% of the time with one value, this value + will be in COUNTERS[0] in the end. + + In any case, COUNTERS[2] is incremented. */ + +static inline void +__gcov_one_value_profiler_body (gcov_type *counters, gcov_type value) +{ + if (value == counters[0]) + counters[1]++; + else if (counters[1] == 0) + { + counters[1] = 1; + counters[0] = value; + } + else + counters[1]--; + counters[2]++; +} + +/* Atomic update version of __gcov_one_value_profiler_body(). 
*/ +static inline void +__gcov_one_value_profiler_body_atomic (gcov_type *counters, gcov_type value) +{ + if (value == counters[0]) + GCOV_TYPE_ATOMIC_FETCH_ADD_FN (&counters[1], 1, MEMMODEL_RELAXED); + else if (counters[1] == 0) + { + counters[1] = 1; + counters[0] = value; + } + else + GCOV_TYPE_ATOMIC_FETCH_ADD_FN (&counters[1], -1, MEMMODEL_RELAXED); + GCOV_TYPE_ATOMIC_FETCH_ADD_FN (&counters[2], 1, MEMMODEL_RELAXED); +} + + +#ifdef L_gcov_one_value_profiler +void +__gcov_one_value_profiler (gcov_type *counters, gcov_type value) +{ + __gcov_one_value_profiler_body (counters, value); +} + +void +__gcov_one_value_profiler_atomic (gcov_type *counters, gcov_type value) +{ + __gcov_one_value_profiler_body_atomic (counters, value); +} + + +#endif + +#ifdef L_gcov_indirect_call_profiler +/* This function exists only as a workaround for binutils bug 14342. + Once this compatibility hack is obsolete, it can be removed. */ + +/* By default, the C++ compiler will use function addresses in the + vtable entries. Setting TARGET_VTABLE_USES_DESCRIPTORS to nonzero + tells the compiler to use function descriptors instead. The value + of this macro says how many words wide the descriptor is (normally 2), + but it may be dependent on target flags. Since we do not have access + to the target flags here we just check to see if it is set and use + that to set VTABLE_USES_DESCRIPTORS to 0 or 1. + + It is assumed that the address of a function descriptor may be treated + as a pointer to a function. */ + +#ifdef TARGET_VTABLE_USES_DESCRIPTORS +#define VTABLE_USES_DESCRIPTORS 1 +#else +#define VTABLE_USES_DESCRIPTORS 0 +#endif + +/* Tries to determine the most common value among its inputs. */ +void +__gcov_indirect_call_profiler (gcov_type* counter, gcov_type value, + void* cur_func, void* callee_func) +{ + /* If the C++ virtual tables contain function descriptors then one + function may have multiple descriptors and we need to dereference + the descriptors to see if they point to the same function. */ + if (cur_func == callee_func + || (VTABLE_USES_DESCRIPTORS && callee_func + && *(void **) cur_func == *(void **) callee_func)) + __gcov_one_value_profiler_body (counter, value); +} + + +/* Atomic update version of __gcov_indirect_call_profiler(). */ +void +__gcov_indirect_call_profiler_atomic (gcov_type* counter, gcov_type value, + void* cur_func, void* callee_func) +{ + if (cur_func == callee_func + || (VTABLE_USES_DESCRIPTORS && callee_func + && *(void **) cur_func == *(void **) callee_func)) + __gcov_one_value_profiler_body_atomic (counter, value); +} + + +#endif +#ifdef L_gcov_indirect_call_profiler_v2 + +/* These two variables are used to actually track caller and callee. Keep + them in TLS memory so races are not common (they are written to often). + The variables are set directly by GCC instrumented code, so the declaration + here must match the one in tree-profile.c. */ + +#if defined(HAVE_CC_TLS) && !defined (USE_EMUTLS) +__thread +#endif +void * __gcov_indirect_call_callee; +#if defined(HAVE_CC_TLS) && !defined (USE_EMUTLS) +__thread +#endif +gcov_type * __gcov_indirect_call_counters; + +/* By default, the C++ compiler will use function addresses in the + vtable entries. Setting TARGET_VTABLE_USES_DESCRIPTORS to nonzero + tells the compiler to use function descriptors instead. The value + of this macro says how many words wide the descriptor is (normally 2), + but it may be dependent on target flags. 
Since we do not have access + to the target flags here we just check to see if it is set and use + that to set VTABLE_USES_DESCRIPTORS to 0 or 1. + + It is assumed that the address of a function descriptor may be treated + as a pointer to a function. */ + +#ifdef TARGET_VTABLE_USES_DESCRIPTORS +#define VTABLE_USES_DESCRIPTORS 1 +#else +#define VTABLE_USES_DESCRIPTORS 0 +#endif + +/* Tries to determine the most common value among its inputs. */ +void +__gcov_indirect_call_profiler_v2 (gcov_type value, void* cur_func) +{ + /* If the C++ virtual tables contain function descriptors then one + function may have multiple descriptors and we need to dereference + the descriptors to see if they point to the same function. */ + if (cur_func == __gcov_indirect_call_callee + || (VTABLE_USES_DESCRIPTORS && __gcov_indirect_call_callee + && *(void **) cur_func == *(void **) __gcov_indirect_call_callee)) + __gcov_one_value_profiler_body (__gcov_indirect_call_counters, value); +} + +void +__gcov_indirect_call_profiler_atomic_v2 (gcov_type value, void* cur_func) +{ + /* If the C++ virtual tables contain function descriptors then one + function may have multiple descriptors and we need to dereference + the descriptors to see if they point to the same function. */ + if (cur_func == __gcov_indirect_call_callee + || (VTABLE_USES_DESCRIPTORS && __gcov_indirect_call_callee + && *(void **) cur_func == *(void **) __gcov_indirect_call_callee)) + __gcov_one_value_profiler_body_atomic (__gcov_indirect_call_counters, value); +} + +#endif + +/* +#if defined(L_gcov_direct_call_profiler) || defined(L_gcov_indirect_call_topn_profiler) +__attribute__ ((weak)) gcov_unsigned_t __gcov_lipo_sampling_period; +#endif +*/ + +extern gcov_unsigned_t __gcov_lipo_sampling_period; + +#ifdef L_gcov_indirect_call_topn_profiler + +#include "gthr.h" + +#ifdef __GTHREAD_MUTEX_INIT +__thread int in_profiler; +ATTRIBUTE_HIDDEN __gthread_mutex_t __indir_topn_val_mx = __GTHREAD_MUTEX_INIT; +#endif + +/* Tries to keep track of the most frequent N values in the counters, where + N is specified by parameter TOPN_VAL. To track top N values, 2*N counter + entries are used. + counter[0] --- the accumulated count of the number of times one entry in + the counters gets evicted/replaced due to limited capacity. + When this value reaches a threshold, the bottom N values are + cleared. + counter[1] through counter[4*N] record the top 2*N values collected so far. + Each value is represented by two entries: counter[2*i+1] is the ith value, and + counter[2*i+2] is the number of times the value is seen. */ + +static void +__gcov_topn_value_profiler_body (gcov_type *counters, gcov_type value, + gcov_unsigned_t topn_val) +{ + unsigned i, found = 0, have_zero_count = 0; + + gcov_type *entry; + gcov_type *lfu_entry = &counters[1]; + gcov_type *value_array = &counters[1]; + gcov_type *num_eviction = &counters[0]; + + /* There are 2*topn_val values tracked, each value takes two slots in the + counter array. */ +#ifdef __GTHREAD_MUTEX_INIT + /* If this is a reentry, return. 
*/ + if (in_profiler == 1) + return; + + in_profiler = 1; + __gthread_mutex_lock (&__indir_topn_val_mx); +#endif + for (i = 0; i < topn_val << 2; i += 2) + { + entry = &value_array[i]; + if (entry[0] == value) + { + entry[1]++; + found = 1; + break; + } + else if (entry[1] == 0) + { + lfu_entry = entry; + have_zero_count = 1; + } + else if (entry[1] < lfu_entry[1]) + lfu_entry = entry; + } + + if (found) + { +#ifdef __GTHREAD_MUTEX_INIT + in_profiler = 0; + __gthread_mutex_unlock (&__indir_topn_val_mx); +#endif + return; + } + + /* lfu_entry is either an empty entry or an entry + with lowest count, which will be evicted. */ + lfu_entry[0] = value; + lfu_entry[1] = 1; + +#define GCOV_ICALL_COUNTER_CLEAR_THRESHOLD 3000 + + /* Too many evictions -- time to clear bottom entries to + avoid hot values bumping each other out. */ + if (!have_zero_count + && ++*num_eviction >= GCOV_ICALL_COUNTER_CLEAR_THRESHOLD) + { + unsigned i, j; + gcov_type **p; + gcov_type **tmp_cnts + = (gcov_type **) alloca (topn_val * sizeof (gcov_type *)); + + *num_eviction = 0; + + /* Find the largest topn_val values from the group of + 2*topn_val values and put the addresses into tmp_cnts. */ + for (i = 0; i < topn_val; i++) + tmp_cnts[i] = &value_array[i * 2 + 1]; + + for (i = topn_val * 2; i < topn_val << 2; i += 2) + { + p = &tmp_cnts[0]; + for (j = 1; j < topn_val; j++) + if (*tmp_cnts[j] > **p) + p = &tmp_cnts[j]; + if (value_array[i + 1] < **p) + *p = &value_array[i + 1]; + } + + /* Zero out low value entries. */ + for (i = 0; i < topn_val; i++) + { + *tmp_cnts[i] = 0; + *(tmp_cnts[i] - 1) = 0; + } + } + +#ifdef __GTHREAD_MUTEX_INIT + in_profiler = 0; + __gthread_mutex_unlock (&__indir_topn_val_mx); +#endif +} + +#if defined(HAVE_CC_TLS) && !defined (USE_EMUTLS) +__thread +#endif +gcov_type *__gcov_indirect_call_topn_counters ATTRIBUTE_HIDDEN; + +#if defined(HAVE_CC_TLS) && !defined (USE_EMUTLS) +__thread +#endif +void *__gcov_indirect_call_topn_callee ATTRIBUTE_HIDDEN; + +#if defined(HAVE_CC_TLS) && !defined (USE_EMUTLS) +__thread +#endif
gcov_unsigned_t __gcov_indirect_call_sampling_counter ATTRIBUTE_HIDDEN; + +#ifdef TARGET_VTABLE_USES_DESCRIPTORS +#define VTABLE_USES_DESCRIPTORS 1 +#else +#define VTABLE_USES_DESCRIPTORS 0 +#endif +void +__gcov_indirect_call_topn_profiler (void *cur_func, + void *cur_module_gcov_info, + gcov_unsigned_t cur_func_id) +{ + void *callee_func = __gcov_indirect_call_topn_callee; + gcov_type *counter = __gcov_indirect_call_topn_counters; + /* If the C++ virtual tables contain function descriptors then one + function may have multiple descriptors and we need to dereference + the descriptors to see if they point to the same function. 
*/ + if (cur_func == callee_func + || (VTABLE_USES_DESCRIPTORS && callee_func + && *(void **) cur_func == *(void **) callee_func)) + { + if (++__gcov_indirect_call_sampling_counter >= __gcov_lipo_sampling_period) + { + __gcov_indirect_call_sampling_counter = 0; + gcov_type global_id + = ((struct gcov_info *) cur_module_gcov_info)->mod_info->ident; + global_id = GEN_FUNC_GLOBAL_ID (global_id, cur_func_id); + __gcov_topn_value_profiler_body (counter, global_id, GCOV_ICALL_TOPN_VAL); + } + __gcov_indirect_call_topn_callee = 0; + } +} + +#endif + +#ifdef L_gcov_direct_call_profiler +#if defined(HAVE_CC_TLS) && !defined (USE_EMUTLS) +__thread +#endif +gcov_type *__gcov_direct_call_counters ATTRIBUTE_HIDDEN; +#if defined(HAVE_CC_TLS) && !defined (USE_EMUTLS) +__thread +#endif +void *__gcov_direct_call_callee ATTRIBUTE_HIDDEN; +#if defined(HAVE_CC_TLS) && !defined (USE_EMUTLS) +__thread +#endif +gcov_unsigned_t __gcov_direct_call_sampling_counter ATTRIBUTE_HIDDEN; + +/* Direct call profiler. */ + +void +__gcov_direct_call_profiler (void *cur_func, + void *cur_module_gcov_info, + gcov_unsigned_t cur_func_id) +{ + if (cur_func == __gcov_direct_call_callee) + { + if (++__gcov_direct_call_sampling_counter >= __gcov_lipo_sampling_period) + { + __gcov_direct_call_sampling_counter = 0; + gcov_type global_id + = ((struct gcov_info *) cur_module_gcov_info)->mod_info->ident; + global_id = GEN_FUNC_GLOBAL_ID (global_id, cur_func_id); + __gcov_direct_call_counters[0] = global_id; + __gcov_direct_call_counters[1]++; + } + __gcov_direct_call_callee = 0; + } +} +#endif + + +#ifdef L_gcov_time_profiler + +/* Counter for first visit of each function. */ +static gcov_type function_counter; + +/* Sets corresponding COUNTERS if there is no value. */ + +void +__gcov_time_profiler (gcov_type* counters) +{ + if (!counters[0]) + counters[0] = ++function_counter; +} +#endif + +#ifdef L_gcov_average_profiler +/* Increase corresponding COUNTER by VALUE. FIXME: Perhaps we want + to saturate up. */ + +void +__gcov_average_profiler (gcov_type *counters, gcov_type value) +{ + counters[0] += value; + counters[1] ++; +} +#endif + +#ifdef L_gcov_ior_profiler +/* Bitwise-OR VALUE into COUNTER. */ + +void +__gcov_ior_profiler (gcov_type *counters, gcov_type value) +{ + *counters |= value; +} +#endif + +#endif /* inhibit_libc */
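The eviction scheme that __gcov_topn_value_profiler_body implements above is easiest to see in isolation. The following standalone sketch is illustrative only: TOPN, track and the sample data are made-up names, and the mutex and the GCOV_ICALL_COUNTER_CLEAR_THRESHOLD clearing pass are omitted. It mirrors the same counter layout -- counters[0] counts evictions, counters[1..4*TOPN] hold (value, count) pairs -- and the same least-frequently-used replacement:

    /* Standalone sketch of the top-N counter layout; not part of libgcov. */
    #include <stdio.h>

    #define TOPN 2                       /* track the top 2 values */
    static long counters[1 + 4 * TOPN];  /* [0] evictions, then (value, count) pairs */

    static void
    track (long value)
    {
      long *pairs = &counters[1];
      long *lfu = &counters[1];
      unsigned i;
      for (i = 0; i < TOPN * 4; i += 2)
        {
          if (pairs[i] == value)         /* already tracked: bump its count */
            {
              pairs[i + 1]++;
              return;
            }
          if (pairs[i + 1] < lfu[1])     /* remember the least-frequent slot */
            lfu = &pairs[i];
        }
      if (lfu[1] != 0)                   /* a non-empty entry is being evicted */
        counters[0]++;
      lfu[0] = value;                    /* replace the LFU entry */
      lfu[1] = 1;
    }

    int
    main (void)
    {
      long sample[] = { 7, 7, 9, 7, 3, 9, 5, 1 };
      unsigned i;
      for (i = 0; i < sizeof sample / sizeof *sample; i++)
        track (sample[i]);
      for (i = 1; i < 1 + 4 * TOPN; i += 2)
        printf ("value %ld seen %ld times\n", counters[i], counters[i + 1]);
      printf ("evictions: %ld\n", counters[0]);
      return 0;
    }

Feeding the eight samples above through track leaves 7 and 9 with the highest counts and records one eviction when the fifth distinct value arrives with all 2*TOPN slots occupied.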
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov.h new file mode 100644 index 0000000..c1ebe6e --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/gcov-src/libgcov.h
@@ -0,0 +1,421 @@ +/* Header file for libgcov-*.c. + Copyright (C) 1996-2014 Free Software Foundation, Inc. + + This file is part of GCC. + + GCC is free software; you can redistribute it and/or modify it under + the terms of the GNU General Public License as published by the Free + Software Foundation; either version 3, or (at your option) any later + version. + + GCC is distributed in the hope that it will be useful, but WITHOUT ANY + WARRANTY; without even the implied warranty of MERCHANTABILITY or + FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License + for more details. + + Under Section 7 of GPL version 3, you are granted additional + permissions described in the GCC Runtime Library Exception, version + 3.1, as published by the Free Software Foundation. + + You should have received a copy of the GNU General Public License and + a copy of the GCC Runtime Library Exception along with this program; + see the files COPYING3 and COPYING.RUNTIME respectively. If not, see + <http://www.gnu.org/licenses/>. */ + +#ifndef GCC_LIBGCOV_H +#define GCC_LIBGCOV_H + +#ifndef __KERNEL__ +/* work around the poisoned malloc/calloc in system.h. */ +#ifndef xmalloc +#define xmalloc malloc +#endif +#ifndef xcalloc +#define xcalloc calloc +#endif +#ifndef xrealloc +#define xrealloc realloc +#endif +#ifndef xfree +#define xfree free +#endif +#else /* __KERNEL__ */ +#include "libgcov-kernel.h" +#endif /* __KERNEL__ */ + +#ifndef IN_GCOV_TOOL +/* About the target. */ +/* This path will be used by libgcov runtime. */ + +#ifndef __KERNEL__ +#include "tconfig.h" +#include "tsystem.h" +#include "coretypes.h" +#include "tm.h" +#include "libgcc_tm.h" +#endif /* __KERNEL__ */ + +#undef FUNC_ID_WIDTH +#undef FUNC_ID_MASK + +#if BITS_PER_UNIT == 8 +typedef unsigned gcov_unsigned_t __attribute__ ((mode (SI))); +typedef unsigned gcov_position_t __attribute__ ((mode (SI))); +#if LONG_LONG_TYPE_SIZE > 32 +typedef signed gcov_type __attribute__ ((mode (DI))); +typedef unsigned gcov_type_unsigned __attribute__ ((mode (DI))); +#define FUNC_ID_WIDTH 32 +#define FUNC_ID_MASK ((1ll << FUNC_ID_WIDTH) - 1) +#else +typedef signed gcov_type __attribute__ ((mode (SI))); +typedef unsigned gcov_type_unsigned __attribute__ ((mode (SI))); +#define FUNC_ID_WIDTH 16 +#define FUNC_ID_MASK ((1 << FUNC_ID_WIDTH) - 1) +#endif +#else /* BITS_PER_UNIT != 8 */ +#if BITS_PER_UNIT == 16 +typedef unsigned gcov_unsigned_t __attribute__ ((mode (HI))); +typedef unsigned gcov_position_t __attribute__ ((mode (HI))); +#if LONG_LONG_TYPE_SIZE > 32 +typedef signed gcov_type __attribute__ ((mode (SI))); +typedef unsigned gcov_type_unsigned __attribute__ ((mode (SI))); +#define FUNC_ID_WIDTH 32 +#define FUNC_ID_MASK ((1ll << FUNC_ID_WIDTH) - 1) +#else +typedef signed gcov_type __attribute__ ((mode (HI))); +typedef unsigned gcov_type_unsigned __attribute__ ((mode (HI))); +#define FUNC_ID_WIDTH 16 +#define FUNC_ID_MASK ((1 << FUNC_ID_WIDTH) - 1) +#endif +#else /* BITS_PER_UNIT != 16 */ +typedef unsigned gcov_unsigned_t __attribute__ ((mode (QI))); +typedef unsigned gcov_position_t __attribute__ ((mode (QI))); +#if LONG_LONG_TYPE_SIZE > 32 +typedef signed gcov_type __attribute__ ((mode (HI))); +typedef unsigned gcov_type_unsigned __attribute__ ((mode (HI))); +#define FUNC_ID_WIDTH 32 +#define FUNC_ID_MASK ((1ll << FUNC_ID_WIDTH) - 1) +#else +typedef signed gcov_type __attribute__ ((mode (QI))); +typedef unsigned gcov_type_unsigned __attribute__ ((mode (QI))); +#define FUNC_ID_WIDTH 16 +#define FUNC_ID_MASK ((1 << FUNC_ID_WIDTH) - 1) +#endif 
+#endif /* BITS_PER_UNIT == 16 */ +#endif /* BITS_PER_UNIT == 8 */ + +#if LONG_LONG_TYPE_SIZE > 32 +#define GCOV_TYPE_ATOMIC_FETCH_ADD_FN __atomic_fetch_add_8 +#define GCOV_TYPE_ATOMIC_FETCH_ADD BUILT_IN_ATOMIC_FETCH_ADD_8 +#else +#define GCOV_TYPE_ATOMIC_FETCH_ADD_FN __atomic_fetch_add_4 +#define GCOV_TYPE_ATOMIC_FETCH_ADD BUILT_IN_ATOMIC_FETCH_ADD_4 +#endif + +#if defined (TARGET_POSIX_IO) +#define GCOV_LOCKED 1 +#else +#define GCOV_LOCKED 0 +#endif + +#else /* IN_GCOV_TOOL */ +/* About the host. */ +/* This path will be compiled for the host and linked into + the gcov-tool binary. */ + +#include "config.h" +#include "system.h" +#include "coretypes.h" +#include "tm.h" + +typedef unsigned gcov_unsigned_t; +typedef unsigned gcov_position_t; +/* gcov_type is typedef'd elsewhere for the compiler. */ +#if defined (HOST_HAS_F_SETLKW) +#define GCOV_LOCKED 1 +#else +#define GCOV_LOCKED 0 +#endif + +#define FUNC_ID_WIDTH 32 +#define FUNC_ID_MASK ((1ll << FUNC_ID_WIDTH) - 1) + +/* Some macros specific to gcov-tool. */ + +#define L_gcov 1 +#define L_gcov_merge_add 1 +#define L_gcov_merge_single 1 +#define L_gcov_merge_delta 1 +#define L_gcov_merge_ior 1 +#define L_gcov_merge_time_profile 1 +#define L_gcov_merge_icall_topn 1 +#define L_gcov_merge_dc 1 + +/* Make certain internal functions/variables in libgcov available for + gcov-tool access. */ +#define GCOV_TOOL_LINKAGE + +extern gcov_type gcov_read_counter_mem (); +extern unsigned gcov_get_merge_weight (); + +#endif /* !IN_GCOV_TOOL */ + +#undef EXTRACT_MODULE_ID_FROM_GLOBAL_ID +#undef EXTRACT_FUNC_ID_FROM_GLOBAL_ID +#undef GEN_FUNC_GLOBAL_ID +#define EXTRACT_MODULE_ID_FROM_GLOBAL_ID(gid) \ + (gcov_unsigned_t)(((gid) >> FUNC_ID_WIDTH) & FUNC_ID_MASK) +#define EXTRACT_FUNC_ID_FROM_GLOBAL_ID(gid) \ + (gcov_unsigned_t)((gid) & FUNC_ID_MASK) +#define GEN_FUNC_GLOBAL_ID(m,f) ((((gcov_type) (m)) << FUNC_ID_WIDTH) | (f)) + +#if defined(inhibit_libc) +#define IN_LIBGCOV (-1) +#else +#define IN_LIBGCOV 1 +#if defined(L_gcov) +#define GCOV_LINKAGE /* nothing */ +#endif +#endif + +/* In libgcov we need these functions to be extern, so prefix them with + __gcov. In libgcov they must also be hidden so that the instance in + the executable is not also used in a DSO. */ +#define gcov_var __gcov_var +#define gcov_open __gcov_open +#define gcov_close __gcov_close +#define gcov_write_tag_length __gcov_write_tag_length +#define gcov_position __gcov_position +#define gcov_seek __gcov_seek +#define gcov_rewrite __gcov_rewrite +#define gcov_truncate __gcov_truncate +#define gcov_is_error __gcov_is_error +#define gcov_write_unsigned __gcov_write_unsigned +#define gcov_write_counter __gcov_write_counter +#define gcov_write_summary __gcov_write_summary +#define gcov_write_module_info __gcov_write_module_info +#define gcov_read_unsigned __gcov_read_unsigned +#define gcov_read_counter __gcov_read_counter +#define gcov_read_summary __gcov_read_summary +#define gcov_read_buildinfo __gcov_read_buildinfo +#define gcov_read_module_info __gcov_read_module_info +#define gcov_sort_n_vals __gcov_sort_n_vals + +/* Poison these, so they don't accidentally slip in. */ +#pragma GCC poison gcov_write_string gcov_write_tag gcov_write_length +#pragma GCC poison gcov_time gcov_magic + +#ifdef HAVE_GAS_HIDDEN +#define ATTRIBUTE_HIDDEN __attribute__ ((__visibility__ ("hidden"))) +#else +#define ATTRIBUTE_HIDDEN +#endif + +#include "gcov-io.h" + +/* Structures embedded in the instrumented program. The structures generated + by write_profile must match these. 
*/ +/* Information about counters for a single function. */ +struct gcov_ctr_info +{ + gcov_unsigned_t num; /* number of counters. */ + gcov_type *values; /* their values. */ +}; + +/* Information about a single function. This uses the trailing array + idiom. The number of counters is determined from the merge pointer + array in gcov_info. The key is used to detect which of a set of + comdat functions was selected -- it points to the gcov_info object + of the object file containing the selected comdat function. */ + +struct gcov_fn_info +{ + const struct gcov_info *key; /* comdat key */ + gcov_unsigned_t ident; /* unique ident of function */ + gcov_unsigned_t lineno_checksum; /* function lineno checksum */ + gcov_unsigned_t cfg_checksum; /* function cfg checksum */ + struct gcov_ctr_info ctrs[1]; /* instrumented counters */ +}; + +/* Type of function used to merge counters. */ +typedef void (*gcov_merge_fn) (gcov_type *, gcov_unsigned_t); + +/* Information about a single object file. */ +struct gcov_info +{ + gcov_unsigned_t version; /* expected version number */ + struct gcov_module_info *mod_info; /* additional module info. */ + struct gcov_info *next; /* link to next, used by libgcov */ + + gcov_unsigned_t stamp; /* uniquifying time stamp */ + const char *filename; /* output file name */ + gcov_unsigned_t eof_pos; /* end position of profile data */ + gcov_merge_fn merge[GCOV_COUNTERS]; /* merge functions (null for + unused) */ + + unsigned n_functions; /* number of functions */ + +#if !defined (IN_GCOV_TOOL) && !defined (__KERNEL__) + const struct gcov_fn_info *const *functions; /* pointer to pointers + to function information */ +#elif defined (IN_GCOV_TOOL) + const struct gcov_fn_info **functions; +#else + struct gcov_fn_info **functions; +#endif /* !IN_GCOV_TOOL */ + char **build_info; /* strings to include in the BUILD_INFO + section of the gcda file. */ +}; + +/* Information about a single imported module. */ +struct dyn_imp_mod +{ + const struct gcov_info *imp_mod; + double weight; +}; + +/* Register a new object file module. */ +extern void __gcov_init (struct gcov_info *) ATTRIBUTE_HIDDEN; + +/* Set sampling rate to RATE. */ +extern void __gcov_set_sampling_rate (unsigned int rate); + +/* Called before fork, to avoid double counting. */ +extern void __gcov_flush (void) ATTRIBUTE_HIDDEN; + +/* Function to reset all counters to 0. */ +extern void __gcov_reset (void); +/* Function to enable early write of profile information so far. + __gcov_dump is also used by __gcov_dump_all. The latter + depends on __gcov_dump having hidden or protected visibility + so that each library has its own copy of the registered dumper. */ +extern void __gcov_dump (void) ATTRIBUTE_HIDDEN; + +/* Call the __gcov_dump registered from each shared library. + This function must have default visibility. */ +void __gcov_dump_all (void); + +/* The merge function that just sums the counters. */ +extern void __gcov_merge_add (gcov_type *, unsigned) ATTRIBUTE_HIDDEN; + +/* The merge function to choose the most common value. */ +extern void __gcov_merge_single (gcov_type *, unsigned) ATTRIBUTE_HIDDEN; + +/* The merge function to choose the most common difference between + consecutive values. */ +extern void __gcov_merge_delta (gcov_type *, unsigned) ATTRIBUTE_HIDDEN; + +/* The merge function that just ORs the counters together. */ +extern void __gcov_merge_ior (gcov_type *, unsigned) ATTRIBUTE_HIDDEN; + +/* The merge function used for direct call counters. 
*/ +extern void __gcov_merge_dc (gcov_type *, unsigned) ATTRIBUTE_HIDDEN; + +/* The merge function used for indirect call counters. */ +extern void __gcov_merge_icall_topn (gcov_type *, unsigned) ATTRIBUTE_HIDDEN; + +extern void __gcov_merge_time_profile (gcov_type *, unsigned) ATTRIBUTE_HIDDEN; + +/* The profiler functions. */ +extern void __gcov_interval_profiler (gcov_type *, gcov_type, int, unsigned); +extern void __gcov_pow2_profiler (gcov_type *, gcov_type); +extern void __gcov_one_value_profiler (gcov_type *, gcov_type); +extern void __gcov_indirect_call_profiler (gcov_type*, gcov_type, + void*, void*); +extern void __gcov_indirect_call_profiler_v2 (gcov_type, void *); +extern void __gcov_indirect_call_topn_profiler (void *, void *, gcov_unsigned_t) ATTRIBUTE_HIDDEN; +extern void __gcov_direct_call_profiler (void *, void *, gcov_unsigned_t) ATTRIBUTE_HIDDEN; +extern void __gcov_average_profiler (gcov_type *, gcov_type); +extern void __gcov_ior_profiler (gcov_type *, gcov_type); +extern void __gcov_sort_n_vals (gcov_type *value_array, int n); +extern void __gcov_time_profiler (gcov_type *); + +#ifndef inhibit_libc +/* The wrappers around some library functions. */ +extern pid_t __gcov_fork (void) ATTRIBUTE_HIDDEN; +extern int __gcov_execl (const char *, char *, ...) ATTRIBUTE_HIDDEN; +extern int __gcov_execlp (const char *, char *, ...) ATTRIBUTE_HIDDEN; +extern int __gcov_execle (const char *, char *, ...) ATTRIBUTE_HIDDEN; +extern int __gcov_execv (const char *, char *const []) ATTRIBUTE_HIDDEN; +extern int __gcov_execvp (const char *, char *const []) ATTRIBUTE_HIDDEN; +extern int __gcov_execve (const char *, char *const [], char *const []) + ATTRIBUTE_HIDDEN; + + +/* Functions that are only available in libgcov. */ +GCOV_LINKAGE int gcov_open (const char */*name*/) ATTRIBUTE_HIDDEN; +GCOV_LINKAGE void gcov_write_counter (gcov_type) ATTRIBUTE_HIDDEN; +GCOV_LINKAGE void gcov_write_tag_length (gcov_unsigned_t, gcov_unsigned_t) + ATTRIBUTE_HIDDEN; +GCOV_LINKAGE void gcov_write_summary (gcov_unsigned_t /*tag*/, + const struct gcov_summary *) + ATTRIBUTE_HIDDEN; +GCOV_LINKAGE void gcov_seek (gcov_position_t /*position*/) ATTRIBUTE_HIDDEN; +GCOV_LINKAGE void gcov_truncate (void) ATTRIBUTE_HIDDEN; +void gcov_write_module_info (const struct gcov_info *, unsigned) + ATTRIBUTE_HIDDEN; +GCOV_LINKAGE void gcov_write_module_infos (struct gcov_info *mod_info) + ATTRIBUTE_HIDDEN; +GCOV_LINKAGE const struct dyn_imp_mod ** +gcov_get_sorted_import_module_array (struct gcov_info *mod_info, unsigned *len) + ATTRIBUTE_HIDDEN; +GCOV_LINKAGE inline void gcov_rewrite (void); + +extern void set_gcov_fn_fixed_up (int fixed_up); +extern int get_gcov_fn_fixed_up (void); + +/* "Counts" stored in gcda files can be a real counter value or a target + address. We differentiate these two types because, when manipulating + counts, we should only change real counter values, never target + addresses. */ + +static inline gcov_type +gcov_get_counter (void) +{ +#ifndef IN_GCOV_TOOL + /* This version is for reading count values in the libgcov runtime: + we read from gcda files. */ + + if (get_gcov_fn_fixed_up ()) + { + gcov_read_counter (); + return 0; + } + else + return gcov_read_counter (); +#else + /* This version is for gcov-tool. We read the value from memory and + multiply it by the merge weight. */ + + return gcov_read_counter_mem () * gcov_get_merge_weight (); +#endif +} + +/* Similar to gcov_get_counter(), but handles target address + counters. 
*/ + +static inline gcov_type +gcov_get_counter_target (void) +{ +#ifndef IN_GCOV_TOOL + /* This version is for reading count target values in libgcov runtime: + we read from gcda files. */ + + if (get_gcov_fn_fixed_up ()) + { + gcov_read_counter (); + return 0; + } + else + return gcov_read_counter (); +#else + /* This version is for gcov-tool. We read the value from memory and we do NOT + multiply it by the merge weight. */ + + return gcov_read_counter_mem (); +#endif +} + +#endif /* !inhibit_libc */ + +#endif /* GCC_LIBGCOV_H */
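A small worked example of the global-ID packing defined by GEN_FUNC_GLOBAL_ID and the EXTRACT_* macros above: with FUNC_ID_WIDTH == 32, the module ident occupies the high half of a 64-bit gcov_type and the function ident the low half. This sketch re-declares the macros with stand-in types (int64_t for gcov_type, uint32_t for gcov_unsigned_t) and shortened EXTRACT_* names so it compiles outside libgcov:

    /* Standalone round trip through the global-ID packing arithmetic. */
    #include <stdio.h>
    #include <stdint.h>
    #include <assert.h>

    #define FUNC_ID_WIDTH 32
    #define FUNC_ID_MASK ((1ll << FUNC_ID_WIDTH) - 1)
    #define GEN_FUNC_GLOBAL_ID(m,f) ((((int64_t) (m)) << FUNC_ID_WIDTH) | (f))
    #define EXTRACT_MODULE_ID(gid) (uint32_t)(((gid) >> FUNC_ID_WIDTH) & FUNC_ID_MASK)
    #define EXTRACT_FUNC_ID(gid)   (uint32_t)((gid) & FUNC_ID_MASK)

    int
    main (void)
    {
      uint32_t module_ident = 42;   /* would come from gcov_module_info->ident */
      uint32_t func_ident = 7;      /* would come from gcov_fn_info->ident */
      int64_t gid = GEN_FUNC_GLOBAL_ID (module_ident, func_ident);

      /* The round trip recovers both halves: module in the high 32 bits,
         function in the low 32 bits. */
      assert (EXTRACT_MODULE_ID (gid) == module_ident);
      assert (EXTRACT_FUNC_ID (gid) == func_ident);
      printf ("gid = 0x%llx\n", (unsigned long long) gid);
      return 0;
    }

This is the same packing the topn and direct-call profilers perform before handing the global id to the counter bodies.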
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include-fixed/README b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include-fixed/README new file mode 100644 index 0000000..7086a77 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include-fixed/README
@@ -0,0 +1,14 @@ +This README file is copied into the directory for GCC-only header files +when fixincludes is run by the makefile for GCC. + +Many of the files in this directory were automatically edited from the +standard system header files by the fixincludes process. They are +system-specific, and will not work on any other kind of system. They +are also not part of GCC. We have to do this because GCC requires +ANSI C headers and many vendors supply ANSI-incompatible headers. + +Because this is an automated process, sometimes headers get "fixed" +that do not, strictly speaking, need a fix. As long as nothing is broken +by the process, it is just an unfortunate collateral inconvenience. +We would like to rectify it, if it is not "too inconvenient".
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include-fixed/limits.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include-fixed/limits.h new file mode 100644 index 0000000..8c6a4d3 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include-fixed/limits.h
@@ -0,0 +1,171 @@ +/* Copyright (C) 1992-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify it under +the terms of the GNU General Public License as published by the Free +Software Foundation; either version 3, or (at your option) any later +version. + +GCC is distributed in the hope that it will be useful, but WITHOUT ANY +WARRANTY; without even the implied warranty of MERCHANTABILITY or +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License +for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +/* This administrivia gets added to the beginning of limits.h + if the system has its own version of limits.h. */ + +/* We use _GCC_LIMITS_H_ because we want this not to match + any macros that the system's limits.h uses for its own purposes. */ +#ifndef _GCC_LIMITS_H_ /* Terminated in limity.h. */ +#define _GCC_LIMITS_H_ + +#ifndef _LIBC_LIMITS_H_ +/* Use "..." so that we find syslimits.h only in this same directory. */ +#include "syslimits.h" +#endif +/* Copyright (C) 1991-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify it under +the terms of the GNU General Public License as published by the Free +Software Foundation; either version 3, or (at your option) any later +version. + +GCC is distributed in the hope that it will be useful, but WITHOUT ANY +WARRANTY; without even the implied warranty of MERCHANTABILITY or +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License +for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +#ifndef _LIMITS_H___ +#define _LIMITS_H___ + +/* Number of bits in a `char'. */ +#undef CHAR_BIT +#define CHAR_BIT __CHAR_BIT__ + +/* Maximum length of a multibyte character. */ +#ifndef MB_LEN_MAX +#define MB_LEN_MAX 1 +#endif + +/* Minimum and maximum values a `signed char' can hold. */ +#undef SCHAR_MIN +#define SCHAR_MIN (-SCHAR_MAX - 1) +#undef SCHAR_MAX +#define SCHAR_MAX __SCHAR_MAX__ + +/* Maximum value an `unsigned char' can hold. (Minimum is 0). */ +#undef UCHAR_MAX +#if __SCHAR_MAX__ == __INT_MAX__ +# define UCHAR_MAX (SCHAR_MAX * 2U + 1U) +#else +# define UCHAR_MAX (SCHAR_MAX * 2 + 1) +#endif + +/* Minimum and maximum values a `char' can hold. */ +#ifdef __CHAR_UNSIGNED__ +# undef CHAR_MIN +# if __SCHAR_MAX__ == __INT_MAX__ +# define CHAR_MIN 0U +# else +# define CHAR_MIN 0 +# endif +# undef CHAR_MAX +# define CHAR_MAX UCHAR_MAX +#else +# undef CHAR_MIN +# define CHAR_MIN SCHAR_MIN +# undef CHAR_MAX +# define CHAR_MAX SCHAR_MAX +#endif + +/* Minimum and maximum values a `signed short int' can hold. 
*/ +#undef SHRT_MIN +#define SHRT_MIN (-SHRT_MAX - 1) +#undef SHRT_MAX +#define SHRT_MAX __SHRT_MAX__ + +/* Maximum value an `unsigned short int' can hold. (Minimum is 0). */ +#undef USHRT_MAX +#if __SHRT_MAX__ == __INT_MAX__ +# define USHRT_MAX (SHRT_MAX * 2U + 1U) +#else +# define USHRT_MAX (SHRT_MAX * 2 + 1) +#endif + +/* Minimum and maximum values a `signed int' can hold. */ +#undef INT_MIN +#define INT_MIN (-INT_MAX - 1) +#undef INT_MAX +#define INT_MAX __INT_MAX__ + +/* Maximum value an `unsigned int' can hold. (Minimum is 0). */ +#undef UINT_MAX +#define UINT_MAX (INT_MAX * 2U + 1U) + +/* Minimum and maximum values a `signed long int' can hold. + (Same as `int'). */ +#undef LONG_MIN +#define LONG_MIN (-LONG_MAX - 1L) +#undef LONG_MAX +#define LONG_MAX __LONG_MAX__ + +/* Maximum value an `unsigned long int' can hold. (Minimum is 0). */ +#undef ULONG_MAX +#define ULONG_MAX (LONG_MAX * 2UL + 1UL) + +#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L +/* Minimum and maximum values a `signed long long int' can hold. */ +# undef LLONG_MIN +# define LLONG_MIN (-LLONG_MAX - 1LL) +# undef LLONG_MAX +# define LLONG_MAX __LONG_LONG_MAX__ + +/* Maximum value an `unsigned long long int' can hold. (Minimum is 0). */ +# undef ULLONG_MAX +# define ULLONG_MAX (LLONG_MAX * 2ULL + 1ULL) +#endif + +#if defined (__GNU_LIBRARY__) ? defined (__USE_GNU) : !defined (__STRICT_ANSI__) +/* Minimum and maximum values a `signed long long int' can hold. */ +# undef LONG_LONG_MIN +# define LONG_LONG_MIN (-LONG_LONG_MAX - 1LL) +# undef LONG_LONG_MAX +# define LONG_LONG_MAX __LONG_LONG_MAX__ + +/* Maximum value an `unsigned long long int' can hold. (Minimum is 0). */ +# undef ULONG_LONG_MAX +# define ULONG_LONG_MAX (LONG_LONG_MAX * 2ULL + 1ULL) +#endif + +#endif /* _LIMITS_H___ */ +/* This administrivia gets added to the end of limits.h + if the system has its own version of limits.h. */ + +#else /* not _GCC_LIMITS_H_ */ + +#ifdef _GCC_NEXT_LIMITS_H +#include_next <limits.h> /* recurse down to the real one */ +#endif + +#endif /* not _GCC_LIMITS_H_ */
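The unsigned maxima in the header above are all derived from the corresponding signed maxima as MAX * 2 + 1, i.e. 2^n - 1 = 2 * (2^(n-1) - 1) + 1; the `2U` variants exist to dodge signed overflow in the rare configurations where the signed type is as wide as int. A compile-time check of the identity (illustrative; assumes a conventional 8-bit-char target such as aarch64-linux-android, and a compiler accepting C11's _Static_assert):

    /* Verify the MAX * 2 + 1 derivations used by limits.h. */
    #include <limits.h>

    _Static_assert (UCHAR_MAX == 2U * SCHAR_MAX + 1U, "holds for char");
    _Static_assert (USHRT_MAX == 2U * SHRT_MAX + 1U, "holds for short");
    _Static_assert (UINT_MAX == 2U * INT_MAX + 1U, "holds for int");

    int main (void) { return 0; }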
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include-fixed/linux/a.out.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include-fixed/linux/a.out.h new file mode 100644 index 0000000..148f8c6 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include-fixed/linux/a.out.h
@@ -0,0 +1,235 @@ +/* DO NOT EDIT THIS FILE. + + It has been auto-edited by fixincludes from: + + "/tmp/8ee0b8157b3409c4b84fff35696d6c90/sysroot/usr/include/linux/a.out.h" + + This had to be done to correct non-standard usages in the + original, manufacturer supplied header file. */ + +/**************************************************************************** + **************************************************************************** + *** + *** This header was automatically generated from a Linux kernel header + *** of the same name, to make information necessary for userspace to + *** call into the kernel available to libc. It contains only constants, + *** structures, and macros generated from the original header, and thus, + *** contains no copyrightable information. + *** + *** To edit the content of this header, modify the corresponding + *** source file (e.g. under external/kernel-headers/original/) then + *** run bionic/libc/kernel/tools/update_all.py + *** + *** Any manual change here will be lost the next time this script will + *** be run. You've been warned! + *** + **************************************************************************** + ****************************************************************************/ +#ifndef _UAPI__A_OUT_GNU_H__ +#define _UAPI__A_OUT_GNU_H__ +#define __GNU_EXEC_MACROS__ +#ifndef __STRUCT_EXEC_OVERRIDE__ +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#include <asm/a.out.h> +#endif +#ifndef __ASSEMBLY__ +enum machine_type { +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#ifdef M_OLDSUN2 + M__OLDSUN2 = M_OLDSUN2, +#else + M_OLDSUN2 = 0, +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#endif +#ifdef M_68010 + M__68010 = M_68010, +#else +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ + M_68010 = 1, +#endif +#ifdef M_68020 + M__68020 = M_68020, +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#else + M_68020 = 2, +#endif +#ifdef M_SPARC +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ + M__SPARC = M_SPARC, +#else + M_SPARC = 3, +#endif +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ + M_386 = 100, + M_MIPS1 = 151, + M_MIPS2 = 152 +}; +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#ifndef N_MAGIC +#define N_MAGIC(exec) ((exec).a_info & 0xffff) +#endif +#define N_MACHTYPE(exec) ((enum machine_type)(((exec).a_info >> 16) & 0xff)) +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#define N_FLAGS(exec) (((exec).a_info >> 24) & 0xff) +#define N_SET_INFO(exec, magic, type, flags) ((exec).a_info = ((magic) & 0xffff) | (((int)(type) & 0xff) << 16) | (((flags) & 0xff) << 24)) +#define N_SET_MAGIC(exec, magic) ((exec).a_info = (((exec).a_info & 0xffff0000) | ((magic) & 0xffff))) +#define N_SET_MACHTYPE(exec, machtype) ((exec).a_info = ((exec).a_info&0xff00ffff) | ((((int)(machtype))&0xff) << 16)) +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#define N_SET_FLAGS(exec, flags) ((exec).a_info = ((exec).a_info&0x00ffffff) | (((flags) & 0xff) << 24)) +#define OMAGIC 0407 +#define NMAGIC 0410 +#define ZMAGIC 0413 +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#define QMAGIC 0314 +#define CMAGIC 0421 +#ifndef N_BADMAG +#define N_BADMAG(x) (N_MAGIC(x) != OMAGIC && N_MAGIC(x) != NMAGIC && N_MAGIC(x) != ZMAGIC && N_MAGIC(x) != QMAGIC) +/* WARNING: DO 
NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#endif +#define _N_HDROFF(x) (1024 - sizeof (struct exec)) +#ifndef N_TXTOFF +#define N_TXTOFF(x) (N_MAGIC(x) == ZMAGIC ? _N_HDROFF((x)) + sizeof (struct exec) : (N_MAGIC(x) == QMAGIC ? 0 : sizeof (struct exec))) +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#endif +#ifndef N_DATOFF +#define N_DATOFF(x) (N_TXTOFF(x) + (x).a_text) +#endif +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#ifndef N_TRELOFF +#define N_TRELOFF(x) (N_DATOFF(x) + (x).a_data) +#endif +#ifndef N_DRELOFF +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#define N_DRELOFF(x) (N_TRELOFF(x) + N_TRSIZE(x)) +#endif +#ifndef N_SYMOFF +#define N_SYMOFF(x) (N_DRELOFF(x) + N_DRSIZE(x)) +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#endif +#ifndef N_STROFF +#define N_STROFF(x) (N_SYMOFF(x) + N_SYMSIZE(x)) +#endif +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#ifndef N_TXTADDR +#define N_TXTADDR(x) (N_MAGIC(x) == QMAGIC ? PAGE_SIZE : 0) +#endif +#if defined(vax) || defined(hp300) || defined(pyr) +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#define SEGMENT_SIZE page_size +#endif +#ifdef sony +#define SEGMENT_SIZE 0x2000 +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#endif +#ifdef is68k +#define SEGMENT_SIZE 0x20000 +#endif +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#if defined(m68k) && defined(PORTAR) +#define PAGE_SIZE 0x400 +#define SEGMENT_SIZE PAGE_SIZE +#endif +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#ifdef __linux__ +#include <unistd.h> +#if defined(__i386__) || defined(__mc68000__) +#define SEGMENT_SIZE 1024 +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#else +#ifndef SEGMENT_SIZE +#define SEGMENT_SIZE getpagesize() +#endif +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#endif +#endif +#define _N_SEGMENT_ROUND(x) ALIGN(x, SEGMENT_SIZE) +#define _N_TXTENDADDR(x) (N_TXTADDR(x)+(x).a_text) +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#ifndef N_DATADDR +#define N_DATADDR(x) (N_MAGIC(x)==OMAGIC? 
(_N_TXTENDADDR(x)) : (_N_SEGMENT_ROUND (_N_TXTENDADDR(x)))) +#endif +#ifndef N_BSSADDR +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#define N_BSSADDR(x) (N_DATADDR(x) + (x).a_data) +#endif +#ifndef N_NLIST_DECLARED +struct nlist { +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ + union { + char *n_name; + struct nlist *n_next; + long n_strx; +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ + } n_un; + unsigned char n_type; + char n_other; + short n_desc; +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ + unsigned long n_value; +}; +#endif +#ifndef N_UNDF +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#define N_UNDF 0 +#endif +#ifndef N_ABS +#define N_ABS 2 +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#endif +#ifndef N_TEXT +#define N_TEXT 4 +#endif +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#ifndef N_DATA +#define N_DATA 6 +#endif +#ifndef N_BSS +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#define N_BSS 8 +#endif +#ifndef N_FN +#define N_FN 15 +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#endif +#ifndef N_EXT +#define N_EXT 1 +#endif +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#ifndef N_TYPE +#define N_TYPE 036 +#endif +#ifndef N_STAB +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#define N_STAB 0340 +#endif +#define N_INDR 0xa +#define N_SETA 0x14 +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#define N_SETT 0x16 +#define N_SETD 0x18 +#define N_SETB 0x1A +#define N_SETV 0x1C +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#ifndef N_RELOCATION_INFO_DECLARED +struct relocation_info +{ + int r_address; +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ + unsigned int r_symbolnum:24; + unsigned int r_pcrel:1; + unsigned int r_length:2; + unsigned int r_extern:1; +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#ifdef NS32K + unsigned r_bsr:1; + unsigned r_disp:1; + unsigned r_pad:2; +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#else + unsigned int r_pad:4; +#endif +}; +/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */ +#endif +#endif +#endif
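The a_info macros above pack three fields into one 32-bit word: the magic number in the low 16 bits, the machine type in the next 8, and the flags in the top 8. A self-contained round-trip sketch follows; the one-field struct exec and the copied macro subset are stand-ins (the real definitions come from <asm/a.out.h> and the header above, and the enum cast in N_MACHTYPE is dropped here):

    /* Round trip through the a_info bit-packing macros. */
    #include <assert.h>

    struct exec { unsigned int a_info; };   /* stand-in for the real header */

    #define N_MAGIC(exec) ((exec).a_info & 0xffff)
    #define N_MACHTYPE(exec) (((exec).a_info >> 16) & 0xff)
    #define N_FLAGS(exec) (((exec).a_info >> 24) & 0xff)
    #define N_SET_INFO(exec, magic, type, flags) \
      ((exec).a_info = ((magic) & 0xffff) | (((int)(type) & 0xff) << 16) \
       | (((flags) & 0xff) << 24))
    #define ZMAGIC 0413

    int
    main (void)
    {
      struct exec e;
      /* Pack: low 16 bits magic, next 8 bits machine type, top 8 bits flags. */
      N_SET_INFO (e, ZMAGIC, 100 /* M_386 */, 0x01);
      assert (N_MAGIC (e) == ZMAGIC);
      assert (N_MACHTYPE (e) == 100);
      assert (N_FLAGS (e) == 0x01);
      return 0;
    }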
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include-fixed/syslimits.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include-fixed/syslimits.h new file mode 100644 index 0000000..a362802 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include-fixed/syslimits.h
@@ -0,0 +1,8 @@ +/* syslimits.h stands for the system's own limits.h file. + If we can use it unmodified, then we install this text. + If fixincludes fixes it, then the fixed version is installed + instead of this text. */ + +#define _GCC_NEXT_LIMITS_H /* tell gcc's limits.h to recurse */ +#include_next <limits.h> +#undef _GCC_NEXT_LIMITS_H
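The _GCC_NEXT_LIMITS_H / #include_next dance above is an instance of a general wrapper-header pattern: a header placed earlier on the include path augments, rather than replaces, the identically named header that follows it. A minimal hypothetical wrapper in the same style (the fixed/ directory and FIXED_STDIO_H guard are made-up names, not part of this toolchain):

    /* fixed/stdio.h -- a wrapper header in the include-fixed style. */
    #ifndef FIXED_STDIO_H
    #define FIXED_STDIO_H

    /* Resume the <stdio.h> search in the include directories *after*
       the one containing this file, pulling in the real system header. */
    #include_next <stdio.h>

    /* fixincludes-style corrections would be added here. */

    #endif /* FIXED_STDIO_H */

Built with gcc -Ifixed, a plain #include <stdio.h> finds fixed/stdio.h first, and #include_next still pulls in the real system header behind it. syslimits.h uses the same mechanism, with _GCC_NEXT_LIMITS_H telling GCC's own limits.h which role it is playing in the recursion.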
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/arm_neon.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/arm_neon.h new file mode 100644 index 0000000..ae0ae9c --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/arm_neon.h
@@ -0,0 +1,25391 @@ +/* ARM NEON intrinsics include file. + + Copyright (C) 2011-2014 Free Software Foundation, Inc. + Contributed by ARM Ltd. + + This file is part of GCC. + + GCC is free software; you can redistribute it and/or modify it + under the terms of the GNU General Public License as published + by the Free Software Foundation; either version 3, or (at your + option) any later version. + + GCC is distributed in the hope that it will be useful, but WITHOUT + ANY WARRANTY; without even the implied warranty of MERCHANTABILITY + or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public + License for more details. + + Under Section 7 of GPL version 3, you are granted additional + permissions described in the GCC Runtime Library Exception, version + 3.1, as published by the Free Software Foundation. + + You should have received a copy of the GNU General Public License and + a copy of the GCC Runtime Library Exception along with this program; + see the files COPYING3 and COPYING.RUNTIME respectively. If not, see + <http://www.gnu.org/licenses/>. */ + +#ifndef _AARCH64_NEON_H_ +#define _AARCH64_NEON_H_ + +#include <stdint.h> + +#define __AARCH64_UINT64_C(__C) ((uint64_t) __C) +#define __AARCH64_INT64_C(__C) ((int64_t) __C) + +typedef __builtin_aarch64_simd_qi int8x8_t + __attribute__ ((__vector_size__ (8))); +typedef __builtin_aarch64_simd_hi int16x4_t + __attribute__ ((__vector_size__ (8))); +typedef __builtin_aarch64_simd_si int32x2_t + __attribute__ ((__vector_size__ (8))); +typedef int64_t int64x1_t; +typedef double float64x1_t; +typedef __builtin_aarch64_simd_sf float32x2_t + __attribute__ ((__vector_size__ (8))); +typedef __builtin_aarch64_simd_poly8 poly8x8_t + __attribute__ ((__vector_size__ (8))); +typedef __builtin_aarch64_simd_poly16 poly16x4_t + __attribute__ ((__vector_size__ (8))); +typedef __builtin_aarch64_simd_uqi uint8x8_t + __attribute__ ((__vector_size__ (8))); +typedef __builtin_aarch64_simd_uhi uint16x4_t + __attribute__ ((__vector_size__ (8))); +typedef __builtin_aarch64_simd_usi uint32x2_t + __attribute__ ((__vector_size__ (8))); +typedef uint64_t uint64x1_t; +typedef __builtin_aarch64_simd_qi int8x16_t + __attribute__ ((__vector_size__ (16))); +typedef __builtin_aarch64_simd_hi int16x8_t + __attribute__ ((__vector_size__ (16))); +typedef __builtin_aarch64_simd_si int32x4_t + __attribute__ ((__vector_size__ (16))); +typedef __builtin_aarch64_simd_di int64x2_t + __attribute__ ((__vector_size__ (16))); +typedef __builtin_aarch64_simd_sf float32x4_t + __attribute__ ((__vector_size__ (16))); +typedef __builtin_aarch64_simd_df float64x2_t + __attribute__ ((__vector_size__ (16))); +typedef __builtin_aarch64_simd_poly8 poly8x16_t + __attribute__ ((__vector_size__ (16))); +typedef __builtin_aarch64_simd_poly16 poly16x8_t + __attribute__ ((__vector_size__ (16))); +typedef __builtin_aarch64_simd_poly64 poly64x2_t + __attribute__ ((__vector_size__ (16))); +typedef __builtin_aarch64_simd_uqi uint8x16_t + __attribute__ ((__vector_size__ (16))); +typedef __builtin_aarch64_simd_uhi uint16x8_t + __attribute__ ((__vector_size__ (16))); +typedef __builtin_aarch64_simd_usi uint32x4_t + __attribute__ ((__vector_size__ (16))); +typedef __builtin_aarch64_simd_udi uint64x2_t + __attribute__ ((__vector_size__ (16))); + +typedef float float32_t; +typedef double float64_t; +typedef __builtin_aarch64_simd_poly8 poly8_t; +typedef __builtin_aarch64_simd_poly16 poly16_t; +typedef __builtin_aarch64_simd_poly64 poly64_t; +typedef __builtin_aarch64_simd_poly128 poly128_t; + +typedef struct 
int8x8x2_t +{ + int8x8_t val[2]; +} int8x8x2_t; + +typedef struct int8x16x2_t +{ + int8x16_t val[2]; +} int8x16x2_t; + +typedef struct int16x4x2_t +{ + int16x4_t val[2]; +} int16x4x2_t; + +typedef struct int16x8x2_t +{ + int16x8_t val[2]; +} int16x8x2_t; + +typedef struct int32x2x2_t +{ + int32x2_t val[2]; +} int32x2x2_t; + +typedef struct int32x4x2_t +{ + int32x4_t val[2]; +} int32x4x2_t; + +typedef struct int64x1x2_t +{ + int64x1_t val[2]; +} int64x1x2_t; + +typedef struct int64x2x2_t +{ + int64x2_t val[2]; +} int64x2x2_t; + +typedef struct uint8x8x2_t +{ + uint8x8_t val[2]; +} uint8x8x2_t; + +typedef struct uint8x16x2_t +{ + uint8x16_t val[2]; +} uint8x16x2_t; + +typedef struct uint16x4x2_t +{ + uint16x4_t val[2]; +} uint16x4x2_t; + +typedef struct uint16x8x2_t +{ + uint16x8_t val[2]; +} uint16x8x2_t; + +typedef struct uint32x2x2_t +{ + uint32x2_t val[2]; +} uint32x2x2_t; + +typedef struct uint32x4x2_t +{ + uint32x4_t val[2]; +} uint32x4x2_t; + +typedef struct uint64x1x2_t +{ + uint64x1_t val[2]; +} uint64x1x2_t; + +typedef struct uint64x2x2_t +{ + uint64x2_t val[2]; +} uint64x2x2_t; + +typedef struct float32x2x2_t +{ + float32x2_t val[2]; +} float32x2x2_t; + +typedef struct float32x4x2_t +{ + float32x4_t val[2]; +} float32x4x2_t; + +typedef struct float64x2x2_t +{ + float64x2_t val[2]; +} float64x2x2_t; + +typedef struct float64x1x2_t +{ + float64x1_t val[2]; +} float64x1x2_t; + +typedef struct poly8x8x2_t +{ + poly8x8_t val[2]; +} poly8x8x2_t; + +typedef struct poly8x16x2_t +{ + poly8x16_t val[2]; +} poly8x16x2_t; + +typedef struct poly16x4x2_t +{ + poly16x4_t val[2]; +} poly16x4x2_t; + +typedef struct poly16x8x2_t +{ + poly16x8_t val[2]; +} poly16x8x2_t; + +typedef struct int8x8x3_t +{ + int8x8_t val[3]; +} int8x8x3_t; + +typedef struct int8x16x3_t +{ + int8x16_t val[3]; +} int8x16x3_t; + +typedef struct int16x4x3_t +{ + int16x4_t val[3]; +} int16x4x3_t; + +typedef struct int16x8x3_t +{ + int16x8_t val[3]; +} int16x8x3_t; + +typedef struct int32x2x3_t +{ + int32x2_t val[3]; +} int32x2x3_t; + +typedef struct int32x4x3_t +{ + int32x4_t val[3]; +} int32x4x3_t; + +typedef struct int64x1x3_t +{ + int64x1_t val[3]; +} int64x1x3_t; + +typedef struct int64x2x3_t +{ + int64x2_t val[3]; +} int64x2x3_t; + +typedef struct uint8x8x3_t +{ + uint8x8_t val[3]; +} uint8x8x3_t; + +typedef struct uint8x16x3_t +{ + uint8x16_t val[3]; +} uint8x16x3_t; + +typedef struct uint16x4x3_t +{ + uint16x4_t val[3]; +} uint16x4x3_t; + +typedef struct uint16x8x3_t +{ + uint16x8_t val[3]; +} uint16x8x3_t; + +typedef struct uint32x2x3_t +{ + uint32x2_t val[3]; +} uint32x2x3_t; + +typedef struct uint32x4x3_t +{ + uint32x4_t val[3]; +} uint32x4x3_t; + +typedef struct uint64x1x3_t +{ + uint64x1_t val[3]; +} uint64x1x3_t; + +typedef struct uint64x2x3_t +{ + uint64x2_t val[3]; +} uint64x2x3_t; + +typedef struct float32x2x3_t +{ + float32x2_t val[3]; +} float32x2x3_t; + +typedef struct float32x4x3_t +{ + float32x4_t val[3]; +} float32x4x3_t; + +typedef struct float64x2x3_t +{ + float64x2_t val[3]; +} float64x2x3_t; + +typedef struct float64x1x3_t +{ + float64x1_t val[3]; +} float64x1x3_t; + +typedef struct poly8x8x3_t +{ + poly8x8_t val[3]; +} poly8x8x3_t; + +typedef struct poly8x16x3_t +{ + poly8x16_t val[3]; +} poly8x16x3_t; + +typedef struct poly16x4x3_t +{ + poly16x4_t val[3]; +} poly16x4x3_t; + +typedef struct poly16x8x3_t +{ + poly16x8_t val[3]; +} poly16x8x3_t; + +typedef struct int8x8x4_t +{ + int8x8_t val[4]; +} int8x8x4_t; + +typedef struct int8x16x4_t +{ + int8x16_t val[4]; +} int8x16x4_t; + +typedef struct 
int16x4x4_t +{ + int16x4_t val[4]; +} int16x4x4_t; + +typedef struct int16x8x4_t +{ + int16x8_t val[4]; +} int16x8x4_t; + +typedef struct int32x2x4_t +{ + int32x2_t val[4]; +} int32x2x4_t; + +typedef struct int32x4x4_t +{ + int32x4_t val[4]; +} int32x4x4_t; + +typedef struct int64x1x4_t +{ + int64x1_t val[4]; +} int64x1x4_t; + +typedef struct int64x2x4_t +{ + int64x2_t val[4]; +} int64x2x4_t; + +typedef struct uint8x8x4_t +{ + uint8x8_t val[4]; +} uint8x8x4_t; + +typedef struct uint8x16x4_t +{ + uint8x16_t val[4]; +} uint8x16x4_t; + +typedef struct uint16x4x4_t +{ + uint16x4_t val[4]; +} uint16x4x4_t; + +typedef struct uint16x8x4_t +{ + uint16x8_t val[4]; +} uint16x8x4_t; + +typedef struct uint32x2x4_t +{ + uint32x2_t val[4]; +} uint32x2x4_t; + +typedef struct uint32x4x4_t +{ + uint32x4_t val[4]; +} uint32x4x4_t; + +typedef struct uint64x1x4_t +{ + uint64x1_t val[4]; +} uint64x1x4_t; + +typedef struct uint64x2x4_t +{ + uint64x2_t val[4]; +} uint64x2x4_t; + +typedef struct float32x2x4_t +{ + float32x2_t val[4]; +} float32x2x4_t; + +typedef struct float32x4x4_t +{ + float32x4_t val[4]; +} float32x4x4_t; + +typedef struct float64x2x4_t +{ + float64x2_t val[4]; +} float64x2x4_t; + +typedef struct float64x1x4_t +{ + float64x1_t val[4]; +} float64x1x4_t; + +typedef struct poly8x8x4_t +{ + poly8x8_t val[4]; +} poly8x8x4_t; + +typedef struct poly8x16x4_t +{ + poly8x16_t val[4]; +} poly8x16x4_t; + +typedef struct poly16x4x4_t +{ + poly16x4_t val[4]; +} poly16x4x4_t; + +typedef struct poly16x8x4_t +{ + poly16x8_t val[4]; +} poly16x8x4_t; + +/* vget_lane internal macros. */ + +#define __aarch64_vget_lane_any(__size, __cast_ret, __cast_a, __a, __b) \ + (__cast_ret \ + __builtin_aarch64_be_checked_get_lane##__size (__cast_a __a, __b)) + +#define __aarch64_vget_lane_f32(__a, __b) \ + __aarch64_vget_lane_any (v2sf, , , __a, __b) +#define __aarch64_vget_lane_f64(__a, __b) (__a) + +#define __aarch64_vget_lane_p8(__a, __b) \ + __aarch64_vget_lane_any (v8qi, (poly8_t), (int8x8_t), __a, __b) +#define __aarch64_vget_lane_p16(__a, __b) \ + __aarch64_vget_lane_any (v4hi, (poly16_t), (int16x4_t), __a, __b) + +#define __aarch64_vget_lane_s8(__a, __b) \ + __aarch64_vget_lane_any (v8qi, , ,__a, __b) +#define __aarch64_vget_lane_s16(__a, __b) \ + __aarch64_vget_lane_any (v4hi, , ,__a, __b) +#define __aarch64_vget_lane_s32(__a, __b) \ + __aarch64_vget_lane_any (v2si, , ,__a, __b) +#define __aarch64_vget_lane_s64(__a, __b) (__a) + +#define __aarch64_vget_lane_u8(__a, __b) \ + __aarch64_vget_lane_any (v8qi, (uint8_t), (int8x8_t), __a, __b) +#define __aarch64_vget_lane_u16(__a, __b) \ + __aarch64_vget_lane_any (v4hi, (uint16_t), (int16x4_t), __a, __b) +#define __aarch64_vget_lane_u32(__a, __b) \ + __aarch64_vget_lane_any (v2si, (uint32_t), (int32x2_t), __a, __b) +#define __aarch64_vget_lane_u64(__a, __b) (__a) + +#define __aarch64_vgetq_lane_f32(__a, __b) \ + __aarch64_vget_lane_any (v4sf, , , __a, __b) +#define __aarch64_vgetq_lane_f64(__a, __b) \ + __aarch64_vget_lane_any (v2df, , , __a, __b) + +#define __aarch64_vgetq_lane_p8(__a, __b) \ + __aarch64_vget_lane_any (v16qi, (poly8_t), (int8x16_t), __a, __b) +#define __aarch64_vgetq_lane_p16(__a, __b) \ + __aarch64_vget_lane_any (v8hi, (poly16_t), (int16x8_t), __a, __b) + +#define __aarch64_vgetq_lane_s8(__a, __b) \ + __aarch64_vget_lane_any (v16qi, , ,__a, __b) +#define __aarch64_vgetq_lane_s16(__a, __b) \ + __aarch64_vget_lane_any (v8hi, , ,__a, __b) +#define __aarch64_vgetq_lane_s32(__a, __b) \ + __aarch64_vget_lane_any (v4si, , ,__a, __b) +#define 
__aarch64_vgetq_lane_s64(__a, __b) \ + __aarch64_vget_lane_any (v2di, , ,__a, __b) + +#define __aarch64_vgetq_lane_u8(__a, __b) \ + __aarch64_vget_lane_any (v16qi, (uint8_t), (int8x16_t), __a, __b) +#define __aarch64_vgetq_lane_u16(__a, __b) \ + __aarch64_vget_lane_any (v8hi, (uint16_t), (int16x8_t), __a, __b) +#define __aarch64_vgetq_lane_u32(__a, __b) \ + __aarch64_vget_lane_any (v4si, (uint32_t), (int32x4_t), __a, __b) +#define __aarch64_vgetq_lane_u64(__a, __b) \ + __aarch64_vget_lane_any (v2di, (uint64_t), (int64x2_t), __a, __b) + +/* __aarch64_vdup_lane internal macros. */ +#define __aarch64_vdup_lane_any(__size, __q1, __q2, __a, __b) \ + vdup##__q1##_n_##__size (__aarch64_vget##__q2##_lane_##__size (__a, __b)) + +#define __aarch64_vdup_lane_f32(__a, __b) \ + __aarch64_vdup_lane_any (f32, , , __a, __b) +#define __aarch64_vdup_lane_f64(__a, __b) (__a) +#define __aarch64_vdup_lane_p8(__a, __b) \ + __aarch64_vdup_lane_any (p8, , , __a, __b) +#define __aarch64_vdup_lane_p16(__a, __b) \ + __aarch64_vdup_lane_any (p16, , , __a, __b) +#define __aarch64_vdup_lane_s8(__a, __b) \ + __aarch64_vdup_lane_any (s8, , , __a, __b) +#define __aarch64_vdup_lane_s16(__a, __b) \ + __aarch64_vdup_lane_any (s16, , , __a, __b) +#define __aarch64_vdup_lane_s32(__a, __b) \ + __aarch64_vdup_lane_any (s32, , , __a, __b) +#define __aarch64_vdup_lane_s64(__a, __b) (__a) +#define __aarch64_vdup_lane_u8(__a, __b) \ + __aarch64_vdup_lane_any (u8, , , __a, __b) +#define __aarch64_vdup_lane_u16(__a, __b) \ + __aarch64_vdup_lane_any (u16, , , __a, __b) +#define __aarch64_vdup_lane_u32(__a, __b) \ + __aarch64_vdup_lane_any (u32, , , __a, __b) +#define __aarch64_vdup_lane_u64(__a, __b) (__a) + +/* __aarch64_vdup_laneq internal macros. */ +#define __aarch64_vdup_laneq_f32(__a, __b) \ + __aarch64_vdup_lane_any (f32, , q, __a, __b) +#define __aarch64_vdup_laneq_f64(__a, __b) \ + __aarch64_vdup_lane_any (f64, , q, __a, __b) +#define __aarch64_vdup_laneq_p8(__a, __b) \ + __aarch64_vdup_lane_any (p8, , q, __a, __b) +#define __aarch64_vdup_laneq_p16(__a, __b) \ + __aarch64_vdup_lane_any (p16, , q, __a, __b) +#define __aarch64_vdup_laneq_s8(__a, __b) \ + __aarch64_vdup_lane_any (s8, , q, __a, __b) +#define __aarch64_vdup_laneq_s16(__a, __b) \ + __aarch64_vdup_lane_any (s16, , q, __a, __b) +#define __aarch64_vdup_laneq_s32(__a, __b) \ + __aarch64_vdup_lane_any (s32, , q, __a, __b) +#define __aarch64_vdup_laneq_s64(__a, __b) \ + __aarch64_vdup_lane_any (s64, , q, __a, __b) +#define __aarch64_vdup_laneq_u8(__a, __b) \ + __aarch64_vdup_lane_any (u8, , q, __a, __b) +#define __aarch64_vdup_laneq_u16(__a, __b) \ + __aarch64_vdup_lane_any (u16, , q, __a, __b) +#define __aarch64_vdup_laneq_u32(__a, __b) \ + __aarch64_vdup_lane_any (u32, , q, __a, __b) +#define __aarch64_vdup_laneq_u64(__a, __b) \ + __aarch64_vdup_lane_any (u64, , q, __a, __b) + +/* __aarch64_vdupq_lane internal macros. 
*/ +#define __aarch64_vdupq_lane_f32(__a, __b) \ + __aarch64_vdup_lane_any (f32, q, , __a, __b) +#define __aarch64_vdupq_lane_f64(__a, __b) (vdupq_n_f64 (__a)) +#define __aarch64_vdupq_lane_p8(__a, __b) \ + __aarch64_vdup_lane_any (p8, q, , __a, __b) +#define __aarch64_vdupq_lane_p16(__a, __b) \ + __aarch64_vdup_lane_any (p16, q, , __a, __b) +#define __aarch64_vdupq_lane_s8(__a, __b) \ + __aarch64_vdup_lane_any (s8, q, , __a, __b) +#define __aarch64_vdupq_lane_s16(__a, __b) \ + __aarch64_vdup_lane_any (s16, q, , __a, __b) +#define __aarch64_vdupq_lane_s32(__a, __b) \ + __aarch64_vdup_lane_any (s32, q, , __a, __b) +#define __aarch64_vdupq_lane_s64(__a, __b) (vdupq_n_s64 (__a)) +#define __aarch64_vdupq_lane_u8(__a, __b) \ + __aarch64_vdup_lane_any (u8, q, , __a, __b) +#define __aarch64_vdupq_lane_u16(__a, __b) \ + __aarch64_vdup_lane_any (u16, q, , __a, __b) +#define __aarch64_vdupq_lane_u32(__a, __b) \ + __aarch64_vdup_lane_any (u32, q, , __a, __b) +#define __aarch64_vdupq_lane_u64(__a, __b) (vdupq_n_u64 (__a)) + +/* __aarch64_vdupq_laneq internal macros. */ +#define __aarch64_vdupq_laneq_f32(__a, __b) \ + __aarch64_vdup_lane_any (f32, q, q, __a, __b) +#define __aarch64_vdupq_laneq_f64(__a, __b) \ + __aarch64_vdup_lane_any (f64, q, q, __a, __b) +#define __aarch64_vdupq_laneq_p8(__a, __b) \ + __aarch64_vdup_lane_any (p8, q, q, __a, __b) +#define __aarch64_vdupq_laneq_p16(__a, __b) \ + __aarch64_vdup_lane_any (p16, q, q, __a, __b) +#define __aarch64_vdupq_laneq_s8(__a, __b) \ + __aarch64_vdup_lane_any (s8, q, q, __a, __b) +#define __aarch64_vdupq_laneq_s16(__a, __b) \ + __aarch64_vdup_lane_any (s16, q, q, __a, __b) +#define __aarch64_vdupq_laneq_s32(__a, __b) \ + __aarch64_vdup_lane_any (s32, q, q, __a, __b) +#define __aarch64_vdupq_laneq_s64(__a, __b) \ + __aarch64_vdup_lane_any (s64, q, q, __a, __b) +#define __aarch64_vdupq_laneq_u8(__a, __b) \ + __aarch64_vdup_lane_any (u8, q, q, __a, __b) +#define __aarch64_vdupq_laneq_u16(__a, __b) \ + __aarch64_vdup_lane_any (u16, q, q, __a, __b) +#define __aarch64_vdupq_laneq_u32(__a, __b) \ + __aarch64_vdup_lane_any (u32, q, q, __a, __b) +#define __aarch64_vdupq_laneq_u64(__a, __b) \ + __aarch64_vdup_lane_any (u64, q, q, __a, __b) + +/* vadd */ +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vadd_s8 (int8x8_t __a, int8x8_t __b) +{ + return __a + __b; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vadd_s16 (int16x4_t __a, int16x4_t __b) +{ + return __a + __b; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vadd_s32 (int32x2_t __a, int32x2_t __b) +{ + return __a + __b; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vadd_f32 (float32x2_t __a, float32x2_t __b) +{ + return __a + __b; +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vadd_f64 (float64x1_t __a, float64x1_t __b) +{ + return __a + __b; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vadd_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return __a + __b; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vadd_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return __a + __b; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vadd_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return __a + __b; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vadd_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a + __b; +} + 
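Each of the vadd intrinsics above compiles to a single vector instruction; because they are defined in terms of GCC's vector extension, individual lanes can also be read back by plain subscripting. A minimal usage sketch (assumes an aarch64-targeting gcc; the values are arbitrary):

    /* Lanewise addition with the 64-bit float32x2_t type defined above. */
    #include <arm_neon.h>
    #include <stdio.h>

    int
    main (void)
    {
      float32x2_t a = { 1.5f, 2.5f };
      float32x2_t b = { 10.0f, 20.0f };
      float32x2_t sum = vadd_f32 (a, b);   /* lanewise add: {11.5, 22.5} */

      /* GCC's vector extension allows subscripting the lanes directly. */
      printf ("%f %f\n", sum[0], sum[1]);
      return 0;
    }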
+__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vadd_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a + __b; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vaddq_s8 (int8x16_t __a, int8x16_t __b) +{ + return __a + __b; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vaddq_s16 (int16x8_t __a, int16x8_t __b) +{ + return __a + __b; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vaddq_s32 (int32x4_t __a, int32x4_t __b) +{ + return __a + __b; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vaddq_s64 (int64x2_t __a, int64x2_t __b) +{ + return __a + __b; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vaddq_f32 (float32x4_t __a, float32x4_t __b) +{ + return __a + __b; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vaddq_f64 (float64x2_t __a, float64x2_t __b) +{ + return __a + __b; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vaddq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return __a + __b; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vaddq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return __a + __b; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vaddq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return __a + __b; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vaddq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return __a + __b; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vaddl_s8 (int8x8_t __a, int8x8_t __b) +{ + return (int16x8_t) __builtin_aarch64_saddlv8qi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vaddl_s16 (int16x4_t __a, int16x4_t __b) +{ + return (int32x4_t) __builtin_aarch64_saddlv4hi (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vaddl_s32 (int32x2_t __a, int32x2_t __b) +{ + return (int64x2_t) __builtin_aarch64_saddlv2si (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vaddl_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_uaddlv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vaddl_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_uaddlv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vaddl_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_uaddlv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vaddl_high_s8 (int8x16_t __a, int8x16_t __b) +{ + return (int16x8_t) __builtin_aarch64_saddl2v16qi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vaddl_high_s16 (int16x8_t __a, int16x8_t __b) +{ + return (int32x4_t) __builtin_aarch64_saddl2v8hi (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vaddl_high_s32 (int32x4_t __a, int32x4_t __b) +{ + return (int64x2_t) __builtin_aarch64_saddl2v4si (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vaddl_high_u8 (uint8x16_t __a, uint8x16_t 
__b) +{ + return (uint16x8_t) __builtin_aarch64_uaddl2v16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vaddl_high_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint32x4_t) __builtin_aarch64_uaddl2v8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vaddl_high_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint64x2_t) __builtin_aarch64_uaddl2v4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vaddw_s8 (int16x8_t __a, int8x8_t __b) +{ + return (int16x8_t) __builtin_aarch64_saddwv8qi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vaddw_s16 (int32x4_t __a, int16x4_t __b) +{ + return (int32x4_t) __builtin_aarch64_saddwv4hi (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vaddw_s32 (int64x2_t __a, int32x2_t __b) +{ + return (int64x2_t) __builtin_aarch64_saddwv2si (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vaddw_u8 (uint16x8_t __a, uint8x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_uaddwv8qi ((int16x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vaddw_u16 (uint32x4_t __a, uint16x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_uaddwv4hi ((int32x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vaddw_u32 (uint64x2_t __a, uint32x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_uaddwv2si ((int64x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vaddw_high_s8 (int16x8_t __a, int8x16_t __b) +{ + return (int16x8_t) __builtin_aarch64_saddw2v16qi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vaddw_high_s16 (int32x4_t __a, int16x8_t __b) +{ + return (int32x4_t) __builtin_aarch64_saddw2v8hi (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vaddw_high_s32 (int64x2_t __a, int32x4_t __b) +{ + return (int64x2_t) __builtin_aarch64_saddw2v4si (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vaddw_high_u8 (uint16x8_t __a, uint8x16_t __b) +{ + return (uint16x8_t) __builtin_aarch64_uaddw2v16qi ((int16x8_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vaddw_high_u16 (uint32x4_t __a, uint16x8_t __b) +{ + return (uint32x4_t) __builtin_aarch64_uaddw2v8hi ((int32x4_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vaddw_high_u32 (uint64x2_t __a, uint32x4_t __b) +{ + return (uint64x2_t) __builtin_aarch64_uaddw2v4si ((int64x2_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vhadd_s8 (int8x8_t __a, int8x8_t __b) +{ + return (int8x8_t) __builtin_aarch64_shaddv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vhadd_s16 (int16x4_t __a, int16x4_t __b) +{ + return (int16x4_t) __builtin_aarch64_shaddv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vhadd_s32 (int32x2_t __a, int32x2_t __b) +{ + return (int32x2_t) 
__builtin_aarch64_shaddv2si (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vhadd_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_uhaddv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vhadd_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_uhaddv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vhadd_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_uhaddv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vhaddq_s8 (int8x16_t __a, int8x16_t __b) +{ + return (int8x16_t) __builtin_aarch64_shaddv16qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vhaddq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (int16x8_t) __builtin_aarch64_shaddv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vhaddq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (int32x4_t) __builtin_aarch64_shaddv4si (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vhaddq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_uhaddv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vhaddq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_uhaddv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vhaddq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_uhaddv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vrhadd_s8 (int8x8_t __a, int8x8_t __b) +{ + return (int8x8_t) __builtin_aarch64_srhaddv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vrhadd_s16 (int16x4_t __a, int16x4_t __b) +{ + return (int16x4_t) __builtin_aarch64_srhaddv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vrhadd_s32 (int32x2_t __a, int32x2_t __b) +{ + return (int32x2_t) __builtin_aarch64_srhaddv2si (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vrhadd_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_urhaddv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vrhadd_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_urhaddv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vrhadd_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_urhaddv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vrhaddq_s8 (int8x16_t __a, int8x16_t __b) +{ + return (int8x16_t) __builtin_aarch64_srhaddv16qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vrhaddq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (int16x8_t) __builtin_aarch64_srhaddv8hi (__a, __b); +} + +__extension__ static __inline 
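/* Sketch (illustrative; helper name hypothetical): vhadd computes
   (a + b) >> 1 and vrhadd computes (a + b + 1) >> 1, both in widened
   intermediate precision, so they are overflow-free averages.

       #include <arm_neon.h>

       // Rounded per-lane average, as used for bilinear-style pixel blends.
       uint8x8_t avg_round (uint8x8_t a, uint8x8_t b)
       {
         return vrhadd_u8 (a, b);
       }
*/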
int32x4_t __attribute__ ((__always_inline__)) +vrhaddq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (int32x4_t) __builtin_aarch64_srhaddv4si (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vrhaddq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_urhaddv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vrhaddq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_urhaddv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vrhaddq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_urhaddv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vaddhn_s16 (int16x8_t __a, int16x8_t __b) +{ + return (int8x8_t) __builtin_aarch64_addhnv8hi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vaddhn_s32 (int32x4_t __a, int32x4_t __b) +{ + return (int16x4_t) __builtin_aarch64_addhnv4si (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vaddhn_s64 (int64x2_t __a, int64x2_t __b) +{ + return (int32x2_t) __builtin_aarch64_addhnv2di (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vaddhn_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_addhnv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vaddhn_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_addhnv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vaddhn_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_addhnv2di ((int64x2_t) __a, + (int64x2_t) __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vraddhn_s16 (int16x8_t __a, int16x8_t __b) +{ + return (int8x8_t) __builtin_aarch64_raddhnv8hi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vraddhn_s32 (int32x4_t __a, int32x4_t __b) +{ + return (int16x4_t) __builtin_aarch64_raddhnv4si (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vraddhn_s64 (int64x2_t __a, int64x2_t __b) +{ + return (int32x2_t) __builtin_aarch64_raddhnv2di (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vraddhn_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_raddhnv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vraddhn_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_raddhnv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vraddhn_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_raddhnv2di ((int64x2_t) __a, + (int64x2_t) __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vaddhn_high_s16 (int8x8_t __a, int16x8_t __b, int16x8_t __c) +{ + return (int8x16_t) __builtin_aarch64_addhn2v8hi (__a, __b, __c); +} + +__extension__ static __inline int16x8_t __attribute__ 
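/* Sketch (illustrative; helper name hypothetical): vaddhn returns only the
   upper half of each widened sum, a cheap renormalisation after 16-bit
   arithmetic; vraddhn does the same with round-to-nearest.

       #include <arm_neon.h>

       // (a + b + 0x80) >> 8 per lane, keeping the rounded high byte.
       uint8x8_t high_byte_of_sum (uint16x8_t a, uint16x8_t b)
       {
         return vraddhn_u16 (a, b);
       }
*/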
((__always_inline__)) +vaddhn_high_s32 (int16x4_t __a, int32x4_t __b, int32x4_t __c) +{ + return (int16x8_t) __builtin_aarch64_addhn2v4si (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vaddhn_high_s64 (int32x2_t __a, int64x2_t __b, int64x2_t __c) +{ + return (int32x4_t) __builtin_aarch64_addhn2v2di (__a, __b, __c); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vaddhn_high_u16 (uint8x8_t __a, uint16x8_t __b, uint16x8_t __c) +{ + return (uint8x16_t) __builtin_aarch64_addhn2v8hi ((int8x8_t) __a, + (int16x8_t) __b, + (int16x8_t) __c); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vaddhn_high_u32 (uint16x4_t __a, uint32x4_t __b, uint32x4_t __c) +{ + return (uint16x8_t) __builtin_aarch64_addhn2v4si ((int16x4_t) __a, + (int32x4_t) __b, + (int32x4_t) __c); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vaddhn_high_u64 (uint32x2_t __a, uint64x2_t __b, uint64x2_t __c) +{ + return (uint32x4_t) __builtin_aarch64_addhn2v2di ((int32x2_t) __a, + (int64x2_t) __b, + (int64x2_t) __c); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vraddhn_high_s16 (int8x8_t __a, int16x8_t __b, int16x8_t __c) +{ + return (int8x16_t) __builtin_aarch64_raddhn2v8hi (__a, __b, __c); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vraddhn_high_s32 (int16x4_t __a, int32x4_t __b, int32x4_t __c) +{ + return (int16x8_t) __builtin_aarch64_raddhn2v4si (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vraddhn_high_s64 (int32x2_t __a, int64x2_t __b, int64x2_t __c) +{ + return (int32x4_t) __builtin_aarch64_raddhn2v2di (__a, __b, __c); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vraddhn_high_u16 (uint8x8_t __a, uint16x8_t __b, uint16x8_t __c) +{ + return (uint8x16_t) __builtin_aarch64_raddhn2v8hi ((int8x8_t) __a, + (int16x8_t) __b, + (int16x8_t) __c); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vraddhn_high_u32 (uint16x4_t __a, uint32x4_t __b, uint32x4_t __c) +{ + return (uint16x8_t) __builtin_aarch64_raddhn2v4si ((int16x4_t) __a, + (int32x4_t) __b, + (int32x4_t) __c); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vraddhn_high_u64 (uint32x2_t __a, uint64x2_t __b, uint64x2_t __c) +{ + return (uint32x4_t) __builtin_aarch64_raddhn2v2di ((int32x2_t) __a, + (int64x2_t) __b, + (int64x2_t) __c); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vdiv_f32 (float32x2_t __a, float32x2_t __b) +{ + return __a / __b; +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vdiv_f64 (float64x1_t __a, float64x1_t __b) +{ + return __a / __b; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vdivq_f32 (float32x4_t __a, float32x4_t __b) +{ + return __a / __b; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vdivq_f64 (float64x2_t __a, float64x2_t __b) +{ + return __a / __b; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vmul_s8 (int8x8_t __a, int8x8_t __b) +{ + return __a * __b; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmul_s16 (int16x4_t __a, int16x4_t __b) +{ + return __a * __b; +} + +__extension__ static __inline int32x2_t __attribute__ 
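/* Sketch (illustrative; helper name hypothetical): unlike 32-bit NEON,
   AArch64 has true lane-wise floating-point division, so vdivq_f32 can
   replace the usual reciprocal-estimate plus Newton-Raphson sequence.

       #include <arm_neon.h>

       float32x4_t ratio (float32x4_t num, float32x4_t den)
       {
         return vdivq_f32 (num, den);   // IEEE division per lane
       }
*/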
((__always_inline__)) +vmul_s32 (int32x2_t __a, int32x2_t __b) +{ + return __a * __b; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmul_f32 (float32x2_t __a, float32x2_t __b) +{ + return __a * __b; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vmul_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return __a * __b; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmul_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return __a * __b; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmul_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return __a * __b; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vmul_p8 (poly8x8_t __a, poly8x8_t __b) +{ + return (poly8x8_t) __builtin_aarch64_pmulv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vmulq_s8 (int8x16_t __a, int8x16_t __b) +{ + return __a * __b; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmulq_s16 (int16x8_t __a, int16x8_t __b) +{ + return __a * __b; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmulq_s32 (int32x4_t __a, int32x4_t __b) +{ + return __a * __b; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmulq_f32 (float32x4_t __a, float32x4_t __b) +{ + return __a * __b; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vmulq_f64 (float64x2_t __a, float64x2_t __b) +{ + return __a * __b; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vmulq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return __a * __b; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmulq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return __a * __b; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmulq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return __a * __b; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vmulq_p8 (poly8x16_t __a, poly8x16_t __b) +{ + return (poly8x16_t) __builtin_aarch64_pmulv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vand_s8 (int8x8_t __a, int8x8_t __b) +{ + return __a & __b; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vand_s16 (int16x4_t __a, int16x4_t __b) +{ + return __a & __b; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vand_s32 (int32x2_t __a, int32x2_t __b) +{ + return __a & __b; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vand_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return __a & __b; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vand_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return __a & __b; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vand_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return __a & __b; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vand_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a & __b; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vand_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a & __b; +} + +__extension__ static __inline int8x16_t __attribute__ 
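/* Sketch (illustrative; helper name hypothetical): vmul_p8 above is a
   carry-less (GF(2) polynomial) multiply, not an integer multiply; each
   8-bit product is truncated to its low 8 bits. This is a building block
   for CRC- and GHASH-style computations.

       #include <arm_neon.h>

       poly8x8_t clmul_lo (poly8x8_t a, poly8x8_t b)
       {
         return vmul_p8 (a, b);   // XOR-accumulated partial products
       }
*/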
((__always_inline__)) +vandq_s8 (int8x16_t __a, int8x16_t __b) +{ + return __a & __b; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vandq_s16 (int16x8_t __a, int16x8_t __b) +{ + return __a & __b; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vandq_s32 (int32x4_t __a, int32x4_t __b) +{ + return __a & __b; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vandq_s64 (int64x2_t __a, int64x2_t __b) +{ + return __a & __b; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vandq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return __a & __b; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vandq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return __a & __b; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vandq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return __a & __b; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vandq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return __a & __b; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vorr_s8 (int8x8_t __a, int8x8_t __b) +{ + return __a | __b; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vorr_s16 (int16x4_t __a, int16x4_t __b) +{ + return __a | __b; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vorr_s32 (int32x2_t __a, int32x2_t __b) +{ + return __a | __b; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vorr_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return __a | __b; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vorr_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return __a | __b; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vorr_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return __a | __b; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vorr_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a | __b; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vorr_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a | __b; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vorrq_s8 (int8x16_t __a, int8x16_t __b) +{ + return __a | __b; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vorrq_s16 (int16x8_t __a, int16x8_t __b) +{ + return __a | __b; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vorrq_s32 (int32x4_t __a, int32x4_t __b) +{ + return __a | __b; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vorrq_s64 (int64x2_t __a, int64x2_t __b) +{ + return __a | __b; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vorrq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return __a | __b; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vorrq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return __a | __b; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vorrq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return __a | __b; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vorrq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return __a | __b; +} + +__extension__ static __inline int8x8_t __attribute__ 
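/* Sketch (illustrative; helper name hypothetical, and vdupq_n_u8 is
   assumed from elsewhere in this header): the bitwise forms are plain
   lane-wise & and | on any integer vector type; a typical use is forcing
   a bit field with a constant mask.

       #include <arm_neon.h>

       // Clear the low nibble of every byte, then set bit 7.
       uint8x16_t fixup (uint8x16_t v)
       {
         v = vandq_u8 (v, vdupq_n_u8 (0xF0));
         return vorrq_u8 (v, vdupq_n_u8 (0x80));
       }
*/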
((__always_inline__)) +veor_s8 (int8x8_t __a, int8x8_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +veor_s16 (int16x4_t __a, int16x4_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +veor_s32 (int32x2_t __a, int32x2_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +veor_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +veor_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +veor_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +veor_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +veor_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +veorq_s8 (int8x16_t __a, int8x16_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +veorq_s16 (int16x8_t __a, int16x8_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +veorq_s32 (int32x4_t __a, int32x4_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +veorq_s64 (int64x2_t __a, int64x2_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +veorq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +veorq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +veorq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +veorq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return __a ^ __b; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vbic_s8 (int8x8_t __a, int8x8_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vbic_s16 (int16x4_t __a, int16x4_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vbic_s32 (int32x2_t __a, int32x2_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vbic_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vbic_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vbic_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vbic_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vbic_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline int8x16_t __attribute__ 
((__always_inline__)) +vbicq_s8 (int8x16_t __a, int8x16_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vbicq_s16 (int16x8_t __a, int16x8_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vbicq_s32 (int32x4_t __a, int32x4_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vbicq_s64 (int64x2_t __a, int64x2_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vbicq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vbicq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vbicq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vbicq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return __a & ~__b; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vorn_s8 (int8x8_t __a, int8x8_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vorn_s16 (int16x4_t __a, int16x4_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vorn_s32 (int32x2_t __a, int32x2_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vorn_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vorn_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vorn_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vorn_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vorn_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vornq_s8 (int8x16_t __a, int8x16_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vornq_s16 (int16x8_t __a, int16x8_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vornq_s32 (int32x4_t __a, int32x4_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vornq_s64 (int64x2_t __a, int64x2_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vornq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vornq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vornq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vornq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return __a | ~__b; +} + +__extension__ static __inline int8x8_t 
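/* Sketch (illustrative; helper name hypothetical): vbic (a & ~b) composes
   with vand and vorr into a branch-free select, picking bits from `b`
   where `mask` is set and from `a` elsewhere; the dedicated vbsl intrinsic
   elsewhere in this header does the same in one instruction.

       #include <arm_neon.h>

       uint8x8_t select_bits (uint8x8_t mask, uint8x8_t a, uint8x8_t b)
       {
         return vorr_u8 (vand_u8 (b, mask), vbic_u8 (a, mask));
       }
*/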
__attribute__ ((__always_inline__)) +vsub_s8 (int8x8_t __a, int8x8_t __b) +{ + return __a - __b; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vsub_s16 (int16x4_t __a, int16x4_t __b) +{ + return __a - __b; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vsub_s32 (int32x2_t __a, int32x2_t __b) +{ + return __a - __b; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vsub_f32 (float32x2_t __a, float32x2_t __b) +{ + return __a - __b; +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vsub_f64 (float64x1_t __a, float64x1_t __b) +{ + return __a - __b; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vsub_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return __a - __b; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vsub_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return __a - __b; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vsub_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return __a - __b; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vsub_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a - __b; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vsub_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a - __b; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vsubq_s8 (int8x16_t __a, int8x16_t __b) +{ + return __a - __b; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vsubq_s16 (int16x8_t __a, int16x8_t __b) +{ + return __a - __b; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vsubq_s32 (int32x4_t __a, int32x4_t __b) +{ + return __a - __b; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vsubq_s64 (int64x2_t __a, int64x2_t __b) +{ + return __a - __b; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vsubq_f32 (float32x4_t __a, float32x4_t __b) +{ + return __a - __b; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vsubq_f64 (float64x2_t __a, float64x2_t __b) +{ + return __a - __b; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vsubq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return __a - __b; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vsubq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return __a - __b; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vsubq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return __a - __b; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vsubq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return __a - __b; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vsubl_s8 (int8x8_t __a, int8x8_t __b) +{ + return (int16x8_t) __builtin_aarch64_ssublv8qi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vsubl_s16 (int16x4_t __a, int16x4_t __b) +{ + return (int32x4_t) __builtin_aarch64_ssublv4hi (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vsubl_s32 (int32x2_t __a, int32x2_t __b) +{ + return (int64x2_t) __builtin_aarch64_ssublv2si (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ 
((__always_inline__)) +vsubl_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_usublv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vsubl_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_usublv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vsubl_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_usublv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vsubl_high_s8 (int8x16_t __a, int8x16_t __b) +{ + return (int16x8_t) __builtin_aarch64_ssubl2v16qi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vsubl_high_s16 (int16x8_t __a, int16x8_t __b) +{ + return (int32x4_t) __builtin_aarch64_ssubl2v8hi (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vsubl_high_s32 (int32x4_t __a, int32x4_t __b) +{ + return (int64x2_t) __builtin_aarch64_ssubl2v4si (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vsubl_high_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return (uint16x8_t) __builtin_aarch64_usubl2v16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vsubl_high_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint32x4_t) __builtin_aarch64_usubl2v8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vsubl_high_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint64x2_t) __builtin_aarch64_usubl2v4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vsubw_s8 (int16x8_t __a, int8x8_t __b) +{ + return (int16x8_t) __builtin_aarch64_ssubwv8qi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vsubw_s16 (int32x4_t __a, int16x4_t __b) +{ + return (int32x4_t) __builtin_aarch64_ssubwv4hi (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vsubw_s32 (int64x2_t __a, int32x2_t __b) +{ + return (int64x2_t) __builtin_aarch64_ssubwv2si (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vsubw_u8 (uint16x8_t __a, uint8x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_usubwv8qi ((int16x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vsubw_u16 (uint32x4_t __a, uint16x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_usubwv4hi ((int32x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vsubw_u32 (uint64x2_t __a, uint32x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_usubwv2si ((int64x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vsubw_high_s8 (int16x8_t __a, int8x16_t __b) +{ + return (int16x8_t) __builtin_aarch64_ssubw2v16qi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vsubw_high_s16 (int32x4_t __a, int16x8_t __b) +{ + return (int32x4_t) __builtin_aarch64_ssubw2v8hi (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vsubw_high_s32 (int64x2_t 
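/* Sketch (illustrative; helper name hypothetical): like vaddl, vsubl widens
   before subtracting, so byte differences keep their full sign and
   magnitude.

       #include <arm_neon.h>

       // Exact per-lane difference: the range [-255, 255] fits int16
       // lanes, where vsub_s8 would wrap.
       int16x8_t exact_diff (int8x8_t a, int8x8_t b)
       {
         return vsubl_s8 (a, b);
       }
*/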
__a, int32x4_t __b) +{ + return (int64x2_t) __builtin_aarch64_ssubw2v4si (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vsubw_high_u8 (uint16x8_t __a, uint8x16_t __b) +{ + return (uint16x8_t) __builtin_aarch64_usubw2v16qi ((int16x8_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vsubw_high_u16 (uint32x4_t __a, uint16x8_t __b) +{ + return (uint32x4_t) __builtin_aarch64_usubw2v8hi ((int32x4_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vsubw_high_u32 (uint64x2_t __a, uint32x4_t __b) +{ + return (uint64x2_t) __builtin_aarch64_usubw2v4si ((int64x2_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqadd_s8 (int8x8_t __a, int8x8_t __b) +{ + return (int8x8_t) __builtin_aarch64_sqaddv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqadd_s16 (int16x4_t __a, int16x4_t __b) +{ + return (int16x4_t) __builtin_aarch64_sqaddv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqadd_s32 (int32x2_t __a, int32x2_t __b) +{ + return (int32x2_t) __builtin_aarch64_sqaddv2si (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqadd_s64 (int64x1_t __a, int64x1_t __b) +{ + return (int64x1_t) __builtin_aarch64_sqadddi (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqadd_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_uqaddv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vqadd_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_uqaddv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vqadd_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_uqaddv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vqadd_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_uqadddi ((int64x1_t) __a, + (int64x1_t) __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqaddq_s8 (int8x16_t __a, int8x16_t __b) +{ + return (int8x16_t) __builtin_aarch64_sqaddv16qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqaddq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (int16x8_t) __builtin_aarch64_sqaddv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqaddq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (int32x4_t) __builtin_aarch64_sqaddv4si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqaddq_s64 (int64x2_t __a, int64x2_t __b) +{ + return (int64x2_t) __builtin_aarch64_sqaddv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqaddq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_uqaddv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vqaddq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_uqaddv8hi ((int16x8_t) __a, + (int16x8_t) 
__b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vqaddq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_uqaddv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vqaddq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_uqaddv2di ((int64x2_t) __a, + (int64x2_t) __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqsub_s8 (int8x8_t __a, int8x8_t __b) +{ + return (int8x8_t) __builtin_aarch64_sqsubv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqsub_s16 (int16x4_t __a, int16x4_t __b) +{ + return (int16x4_t) __builtin_aarch64_sqsubv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqsub_s32 (int32x2_t __a, int32x2_t __b) +{ + return (int32x2_t) __builtin_aarch64_sqsubv2si (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqsub_s64 (int64x1_t __a, int64x1_t __b) +{ + return (int64x1_t) __builtin_aarch64_sqsubdi (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqsub_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_uqsubv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vqsub_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_uqsubv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vqsub_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_uqsubv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vqsub_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_uqsubdi ((int64x1_t) __a, + (int64x1_t) __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqsubq_s8 (int8x16_t __a, int8x16_t __b) +{ + return (int8x16_t) __builtin_aarch64_sqsubv16qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqsubq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (int16x8_t) __builtin_aarch64_sqsubv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqsubq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (int32x4_t) __builtin_aarch64_sqsubv4si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqsubq_s64 (int64x2_t __a, int64x2_t __b) +{ + return (int64x2_t) __builtin_aarch64_sqsubv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqsubq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_uqsubv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vqsubq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_uqsubv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vqsubq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_uqsubv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) 
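/* Sketch (illustrative; helper name hypothetical): the vqadd/vqsub families
   clamp to the type's range instead of wrapping, which is what sample
   mixing in audio-style code wants.

       #include <arm_neon.h>

       // 0x7000 + 0x7000 saturates to 0x7FFF rather than going negative.
       int16x8_t mix (int16x8_t a, int16x8_t b)
       {
         return vqaddq_s16 (a, b);
       }
*/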
+vqsubq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_uqsubv2di ((int64x2_t) __a, + (int64x2_t) __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqneg_s8 (int8x8_t __a) +{ + return (int8x8_t) __builtin_aarch64_sqnegv8qi (__a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqneg_s16 (int16x4_t __a) +{ + return (int16x4_t) __builtin_aarch64_sqnegv4hi (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqneg_s32 (int32x2_t __a) +{ + return (int32x2_t) __builtin_aarch64_sqnegv2si (__a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqnegq_s8 (int8x16_t __a) +{ + return (int8x16_t) __builtin_aarch64_sqnegv16qi (__a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqnegq_s16 (int16x8_t __a) +{ + return (int16x8_t) __builtin_aarch64_sqnegv8hi (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqnegq_s32 (int32x4_t __a) +{ + return (int32x4_t) __builtin_aarch64_sqnegv4si (__a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqabs_s8 (int8x8_t __a) +{ + return (int8x8_t) __builtin_aarch64_sqabsv8qi (__a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqabs_s16 (int16x4_t __a) +{ + return (int16x4_t) __builtin_aarch64_sqabsv4hi (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqabs_s32 (int32x2_t __a) +{ + return (int32x2_t) __builtin_aarch64_sqabsv2si (__a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqabsq_s8 (int8x16_t __a) +{ + return (int8x16_t) __builtin_aarch64_sqabsv16qi (__a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqabsq_s16 (int16x8_t __a) +{ + return (int16x8_t) __builtin_aarch64_sqabsv8hi (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqabsq_s32 (int32x4_t __a) +{ + return (int32x4_t) __builtin_aarch64_sqabsv4si (__a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqdmulh_s16 (int16x4_t __a, int16x4_t __b) +{ + return (int16x4_t) __builtin_aarch64_sqdmulhv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqdmulh_s32 (int32x2_t __a, int32x2_t __b) +{ + return (int32x2_t) __builtin_aarch64_sqdmulhv2si (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqdmulhq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (int16x8_t) __builtin_aarch64_sqdmulhv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmulhq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (int32x4_t) __builtin_aarch64_sqdmulhv4si (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqrdmulh_s16 (int16x4_t __a, int16x4_t __b) +{ + return (int16x4_t) __builtin_aarch64_sqrdmulhv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqrdmulh_s32 (int32x2_t __a, int32x2_t __b) +{ + return (int32x2_t) __builtin_aarch64_sqrdmulhv2si (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqrdmulhq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (int16x8_t) __builtin_aarch64_sqrdmulhv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t 
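/* Sketch (illustrative; helper name hypothetical): vqdmulh computes
   (2 * a * b) >> 16 with saturation, i.e. a Q15 fixed-point multiply, and
   vqrdmulh is the same with rounding; vqneg/vqabs exist because a plain
   negate or abs of the minimum value would overflow (vqabs_s8 maps -128
   to +127).

       #include <arm_neon.h>

       // Multiply two Q15 signals: 0.5 * 0.5 = 0.25
       // (0x4000 * 0x4000 -> 0x2000).
       int16x4_t q15_mul (int16x4_t a, int16x4_t b)
       {
         return vqrdmulh_s16 (a, b);
       }
*/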
__attribute__ ((__always_inline__)) +vqrdmulhq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (int32x4_t) __builtin_aarch64_sqrdmulhv4si (__a, __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vcreate_s8 (uint64_t __a) +{ + return (int8x8_t) __a; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vcreate_s16 (uint64_t __a) +{ + return (int16x4_t) __a; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vcreate_s32 (uint64_t __a) +{ + return (int32x2_t) __a; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vcreate_s64 (uint64_t __a) +{ + return (int64x1_t) __a; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vcreate_f32 (uint64_t __a) +{ + return (float32x2_t) __a; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcreate_u8 (uint64_t __a) +{ + return (uint8x8_t) __a; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vcreate_u16 (uint64_t __a) +{ + return (uint16x4_t) __a; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcreate_u32 (uint64_t __a) +{ + return (uint32x2_t) __a; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcreate_u64 (uint64_t __a) +{ + return (uint64x1_t) __a; +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vcreate_f64 (uint64_t __a) +{ + return (float64x1_t) __builtin_aarch64_createdf (__a); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vcreate_p8 (uint64_t __a) +{ + return (poly8x8_t) __a; +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vcreate_p16 (uint64_t __a) +{ + return (poly16x4_t) __a; +} + +/* vget_lane */ + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vget_lane_f32 (float32x2_t __a, const int __b) +{ + return __aarch64_vget_lane_f32 (__a, __b); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vget_lane_f64 (float64x1_t __a, const int __b) +{ + return __aarch64_vget_lane_f64 (__a, __b); +} + +__extension__ static __inline poly8_t __attribute__ ((__always_inline__)) +vget_lane_p8 (poly8x8_t __a, const int __b) +{ + return __aarch64_vget_lane_p8 (__a, __b); +} + +__extension__ static __inline poly16_t __attribute__ ((__always_inline__)) +vget_lane_p16 (poly16x4_t __a, const int __b) +{ + return __aarch64_vget_lane_p16 (__a, __b); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vget_lane_s8 (int8x8_t __a, const int __b) +{ + return __aarch64_vget_lane_s8 (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vget_lane_s16 (int16x4_t __a, const int __b) +{ + return __aarch64_vget_lane_s16 (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vget_lane_s32 (int32x2_t __a, const int __b) +{ + return __aarch64_vget_lane_s32 (__a, __b); +} + +__extension__ static __inline int64_t __attribute__ ((__always_inline__)) +vget_lane_s64 (int64x1_t __a, const int __b) +{ + return __aarch64_vget_lane_s64 (__a, __b); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vget_lane_u8 (uint8x8_t __a, const int __b) +{ + return __aarch64_vget_lane_u8 (__a, __b); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vget_lane_u16 (uint16x4_t 
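/* Sketch (illustrative; helper name hypothetical): vcreate_* reinterpret a
   64-bit scalar as a vector (on little-endian AArch64, lane 0 is the low
   byte), and vget_lane_* read one lane back; the lane index must be a
   compile-time constant.

       #include <arm_neon.h>

       uint8_t low_byte (void)
       {
         uint8x8_t v = vcreate_u8 (0x0807060504030201ULL);
         return vget_lane_u8 (v, 0);   // 0x01
       }
*/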
__a, const int __b) +{ + return __aarch64_vget_lane_u16 (__a, __b); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vget_lane_u32 (uint32x2_t __a, const int __b) +{ + return __aarch64_vget_lane_u32 (__a, __b); +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vget_lane_u64 (uint64x1_t __a, const int __b) +{ + return __aarch64_vget_lane_u64 (__a, __b); +} + +/* vgetq_lane */ + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vgetq_lane_f32 (float32x4_t __a, const int __b) +{ + return __aarch64_vgetq_lane_f32 (__a, __b); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vgetq_lane_f64 (float64x2_t __a, const int __b) +{ + return __aarch64_vgetq_lane_f64 (__a, __b); +} + +__extension__ static __inline poly8_t __attribute__ ((__always_inline__)) +vgetq_lane_p8 (poly8x16_t __a, const int __b) +{ + return __aarch64_vgetq_lane_p8 (__a, __b); +} + +__extension__ static __inline poly16_t __attribute__ ((__always_inline__)) +vgetq_lane_p16 (poly16x8_t __a, const int __b) +{ + return __aarch64_vgetq_lane_p16 (__a, __b); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vgetq_lane_s8 (int8x16_t __a, const int __b) +{ + return __aarch64_vgetq_lane_s8 (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vgetq_lane_s16 (int16x8_t __a, const int __b) +{ + return __aarch64_vgetq_lane_s16 (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vgetq_lane_s32 (int32x4_t __a, const int __b) +{ + return __aarch64_vgetq_lane_s32 (__a, __b); +} + +__extension__ static __inline int64_t __attribute__ ((__always_inline__)) +vgetq_lane_s64 (int64x2_t __a, const int __b) +{ + return __aarch64_vgetq_lane_s64 (__a, __b); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vgetq_lane_u8 (uint8x16_t __a, const int __b) +{ + return __aarch64_vgetq_lane_u8 (__a, __b); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vgetq_lane_u16 (uint16x8_t __a, const int __b) +{ + return __aarch64_vgetq_lane_u16 (__a, __b); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vgetq_lane_u32 (uint32x4_t __a, const int __b) +{ + return __aarch64_vgetq_lane_u32 (__a, __b); +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vgetq_lane_u64 (uint64x2_t __a, const int __b) +{ + return __aarch64_vgetq_lane_u64 (__a, __b); +} + +/* vreinterpret */ + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vreinterpret_p8_s8 (int8x8_t __a) +{ + return (poly8x8_t) __builtin_aarch64_reinterpretv8qiv8qi (__a); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vreinterpret_p8_s16 (int16x4_t __a) +{ + return (poly8x8_t) __builtin_aarch64_reinterpretv8qiv4hi (__a); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vreinterpret_p8_s32 (int32x2_t __a) +{ + return (poly8x8_t) __builtin_aarch64_reinterpretv8qiv2si (__a); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vreinterpret_p8_s64 (int64x1_t __a) +{ + return (poly8x8_t) __builtin_aarch64_reinterpretv8qidi (__a); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vreinterpret_p8_f32 (float32x2_t __a) +{ + return (poly8x8_t) __builtin_aarch64_reinterpretv8qiv2sf (__a); +} + +__extension__ static 
__inline poly8x8_t __attribute__ ((__always_inline__)) +vreinterpret_p8_u8 (uint8x8_t __a) +{ + return (poly8x8_t) __builtin_aarch64_reinterpretv8qiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vreinterpret_p8_u16 (uint16x4_t __a) +{ + return (poly8x8_t) __builtin_aarch64_reinterpretv8qiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vreinterpret_p8_u32 (uint32x2_t __a) +{ + return (poly8x8_t) __builtin_aarch64_reinterpretv8qiv2si ((int32x2_t) __a); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vreinterpret_p8_u64 (uint64x1_t __a) +{ + return (poly8x8_t) __builtin_aarch64_reinterpretv8qidi ((int64x1_t) __a); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vreinterpret_p8_p16 (poly16x4_t __a) +{ + return (poly8x8_t) __builtin_aarch64_reinterpretv8qiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_p8_s8 (int8x16_t __a) +{ + return (poly8x16_t) __builtin_aarch64_reinterpretv16qiv16qi (__a); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_p8_s16 (int16x8_t __a) +{ + return (poly8x16_t) __builtin_aarch64_reinterpretv16qiv8hi (__a); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_p8_s32 (int32x4_t __a) +{ + return (poly8x16_t) __builtin_aarch64_reinterpretv16qiv4si (__a); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_p8_s64 (int64x2_t __a) +{ + return (poly8x16_t) __builtin_aarch64_reinterpretv16qiv2di (__a); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_p8_f32 (float32x4_t __a) +{ + return (poly8x16_t) __builtin_aarch64_reinterpretv16qiv4sf (__a); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_p8_u8 (uint8x16_t __a) +{ + return (poly8x16_t) __builtin_aarch64_reinterpretv16qiv16qi ((int8x16_t) + __a); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_p8_u16 (uint16x8_t __a) +{ + return (poly8x16_t) __builtin_aarch64_reinterpretv16qiv8hi ((int16x8_t) + __a); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_p8_u32 (uint32x4_t __a) +{ + return (poly8x16_t) __builtin_aarch64_reinterpretv16qiv4si ((int32x4_t) + __a); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_p8_u64 (uint64x2_t __a) +{ + return (poly8x16_t) __builtin_aarch64_reinterpretv16qiv2di ((int64x2_t) + __a); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_p8_p16 (poly16x8_t __a) +{ + return (poly8x16_t) __builtin_aarch64_reinterpretv16qiv8hi ((int16x8_t) + __a); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vreinterpret_p16_s8 (int8x8_t __a) +{ + return (poly16x4_t) __builtin_aarch64_reinterpretv4hiv8qi (__a); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vreinterpret_p16_s16 (int16x4_t __a) +{ + return (poly16x4_t) __builtin_aarch64_reinterpretv4hiv4hi (__a); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vreinterpret_p16_s32 (int32x2_t __a) +{ + return (poly16x4_t) __builtin_aarch64_reinterpretv4hiv2si (__a); +} + 
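/* Sketch (illustrative; helper name hypothetical, and vreinterpret_u8_p8 is
   assumed from later in this header): vreinterpret_* are pure bit-pattern
   casts that compile to nothing; e.g. ordinary byte data must be viewed as
   poly8x8_t before the polynomial multiply defined earlier.

       #include <arm_neon.h>

       uint8x8_t clmul_bytes (uint8x8_t a, uint8x8_t b)
       {
         poly8x8_t p = vmul_p8 (vreinterpret_p8_u8 (a),
                                vreinterpret_p8_u8 (b));
         return vreinterpret_u8_p8 (p);
       }
*/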
+__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vreinterpret_p16_s64 (int64x1_t __a) +{ + return (poly16x4_t) __builtin_aarch64_reinterpretv4hidi (__a); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vreinterpret_p16_f32 (float32x2_t __a) +{ + return (poly16x4_t) __builtin_aarch64_reinterpretv4hiv2sf (__a); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vreinterpret_p16_u8 (uint8x8_t __a) +{ + return (poly16x4_t) __builtin_aarch64_reinterpretv4hiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vreinterpret_p16_u16 (uint16x4_t __a) +{ + return (poly16x4_t) __builtin_aarch64_reinterpretv4hiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vreinterpret_p16_u32 (uint32x2_t __a) +{ + return (poly16x4_t) __builtin_aarch64_reinterpretv4hiv2si ((int32x2_t) __a); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vreinterpret_p16_u64 (uint64x1_t __a) +{ + return (poly16x4_t) __builtin_aarch64_reinterpretv4hidi ((int64x1_t) __a); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vreinterpret_p16_p8 (poly8x8_t __a) +{ + return (poly16x4_t) __builtin_aarch64_reinterpretv4hiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_p16_s8 (int8x16_t __a) +{ + return (poly16x8_t) __builtin_aarch64_reinterpretv8hiv16qi (__a); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_p16_s16 (int16x8_t __a) +{ + return (poly16x8_t) __builtin_aarch64_reinterpretv8hiv8hi (__a); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_p16_s32 (int32x4_t __a) +{ + return (poly16x8_t) __builtin_aarch64_reinterpretv8hiv4si (__a); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_p16_s64 (int64x2_t __a) +{ + return (poly16x8_t) __builtin_aarch64_reinterpretv8hiv2di (__a); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_p16_f32 (float32x4_t __a) +{ + return (poly16x8_t) __builtin_aarch64_reinterpretv8hiv4sf (__a); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_p16_u8 (uint8x16_t __a) +{ + return (poly16x8_t) __builtin_aarch64_reinterpretv8hiv16qi ((int8x16_t) + __a); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_p16_u16 (uint16x8_t __a) +{ + return (poly16x8_t) __builtin_aarch64_reinterpretv8hiv8hi ((int16x8_t) __a); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_p16_u32 (uint32x4_t __a) +{ + return (poly16x8_t) __builtin_aarch64_reinterpretv8hiv4si ((int32x4_t) __a); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_p16_u64 (uint64x2_t __a) +{ + return (poly16x8_t) __builtin_aarch64_reinterpretv8hiv2di ((int64x2_t) __a); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_p16_p8 (poly8x16_t __a) +{ + return (poly16x8_t) __builtin_aarch64_reinterpretv8hiv16qi ((int8x16_t) + __a); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vreinterpret_f32_s8 (int8x8_t __a) +{ + return (float32x2_t) 
__builtin_aarch64_reinterpretv2sfv8qi (__a); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vreinterpret_f32_s16 (int16x4_t __a) +{ + return (float32x2_t) __builtin_aarch64_reinterpretv2sfv4hi (__a); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vreinterpret_f32_s32 (int32x2_t __a) +{ + return (float32x2_t) __builtin_aarch64_reinterpretv2sfv2si (__a); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vreinterpret_f32_s64 (int64x1_t __a) +{ + return (float32x2_t) __builtin_aarch64_reinterpretv2sfdi (__a); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vreinterpret_f32_u8 (uint8x8_t __a) +{ + return (float32x2_t) __builtin_aarch64_reinterpretv2sfv8qi ((int8x8_t) __a); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vreinterpret_f32_u16 (uint16x4_t __a) +{ + return (float32x2_t) __builtin_aarch64_reinterpretv2sfv4hi ((int16x4_t) + __a); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vreinterpret_f32_u32 (uint32x2_t __a) +{ + return (float32x2_t) __builtin_aarch64_reinterpretv2sfv2si ((int32x2_t) + __a); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vreinterpret_f32_u64 (uint64x1_t __a) +{ + return (float32x2_t) __builtin_aarch64_reinterpretv2sfdi ((int64x1_t) __a); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vreinterpret_f32_p8 (poly8x8_t __a) +{ + return (float32x2_t) __builtin_aarch64_reinterpretv2sfv8qi ((int8x8_t) __a); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vreinterpret_f32_p16 (poly16x4_t __a) +{ + return (float32x2_t) __builtin_aarch64_reinterpretv2sfv4hi ((int16x4_t) + __a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_f32_s8 (int8x16_t __a) +{ + return (float32x4_t) __builtin_aarch64_reinterpretv4sfv16qi (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_f32_s16 (int16x8_t __a) +{ + return (float32x4_t) __builtin_aarch64_reinterpretv4sfv8hi (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_f32_s32 (int32x4_t __a) +{ + return (float32x4_t) __builtin_aarch64_reinterpretv4sfv4si (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_f32_s64 (int64x2_t __a) +{ + return (float32x4_t) __builtin_aarch64_reinterpretv4sfv2di (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_f32_u8 (uint8x16_t __a) +{ + return (float32x4_t) __builtin_aarch64_reinterpretv4sfv16qi ((int8x16_t) + __a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_f32_u16 (uint16x8_t __a) +{ + return (float32x4_t) __builtin_aarch64_reinterpretv4sfv8hi ((int16x8_t) + __a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_f32_u32 (uint32x4_t __a) +{ + return (float32x4_t) __builtin_aarch64_reinterpretv4sfv4si ((int32x4_t) + __a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_f32_u64 (uint64x2_t __a) +{ + return (float32x4_t) __builtin_aarch64_reinterpretv4sfv2di ((int64x2_t) + __a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) 
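/* Sketch (illustrative; helper name hypothetical, and vdupq_n_u32 is
   assumed from elsewhere in this header): reinterpreting integer bit
   patterns as float lanes is the idiomatic way to synthesise IEEE
   constants; 0x3f800000 is the encoding of 1.0f.

       #include <arm_neon.h>

       float32x4_t ones (void)
       {
         return vreinterpretq_f32_u32 (vdupq_n_u32 (0x3f800000));
       }
*/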
+vreinterpretq_f32_p8 (poly8x16_t __a) +{ + return (float32x4_t) __builtin_aarch64_reinterpretv4sfv16qi ((int8x16_t) + __a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_f32_p16 (poly16x8_t __a) +{ + return (float32x4_t) __builtin_aarch64_reinterpretv4sfv8hi ((int16x8_t) + __a); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vreinterpret_s64_s8 (int8x8_t __a) +{ + return (int64x1_t) __builtin_aarch64_reinterpretdiv8qi (__a); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vreinterpret_s64_s16 (int16x4_t __a) +{ + return (int64x1_t) __builtin_aarch64_reinterpretdiv4hi (__a); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vreinterpret_s64_s32 (int32x2_t __a) +{ + return (int64x1_t) __builtin_aarch64_reinterpretdiv2si (__a); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vreinterpret_s64_f32 (float32x2_t __a) +{ + return (int64x1_t) __builtin_aarch64_reinterpretdiv2sf (__a); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vreinterpret_s64_u8 (uint8x8_t __a) +{ + return (int64x1_t) __builtin_aarch64_reinterpretdiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vreinterpret_s64_u16 (uint16x4_t __a) +{ + return (int64x1_t) __builtin_aarch64_reinterpretdiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vreinterpret_s64_u32 (uint32x2_t __a) +{ + return (int64x1_t) __builtin_aarch64_reinterpretdiv2si ((int32x2_t) __a); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vreinterpret_s64_u64 (uint64x1_t __a) +{ + return (int64x1_t) __builtin_aarch64_reinterpretdidi ((int64x1_t) __a); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vreinterpret_s64_p8 (poly8x8_t __a) +{ + return (int64x1_t) __builtin_aarch64_reinterpretdiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vreinterpret_s64_p16 (poly16x4_t __a) +{ + return (int64x1_t) __builtin_aarch64_reinterpretdiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_s64_s8 (int8x16_t __a) +{ + return (int64x2_t) __builtin_aarch64_reinterpretv2div16qi (__a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_s64_s16 (int16x8_t __a) +{ + return (int64x2_t) __builtin_aarch64_reinterpretv2div8hi (__a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_s64_s32 (int32x4_t __a) +{ + return (int64x2_t) __builtin_aarch64_reinterpretv2div4si (__a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_s64_f32 (float32x4_t __a) +{ + return (int64x2_t) __builtin_aarch64_reinterpretv2div4sf (__a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_s64_u8 (uint8x16_t __a) +{ + return (int64x2_t) __builtin_aarch64_reinterpretv2div16qi ((int8x16_t) __a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_s64_u16 (uint16x8_t __a) +{ + return (int64x2_t) __builtin_aarch64_reinterpretv2div8hi ((int16x8_t) __a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_s64_u32 
(uint32x4_t __a) +{ + return (int64x2_t) __builtin_aarch64_reinterpretv2div4si ((int32x4_t) __a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_s64_u64 (uint64x2_t __a) +{ + return (int64x2_t) __builtin_aarch64_reinterpretv2div2di ((int64x2_t) __a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_s64_p8 (poly8x16_t __a) +{ + return (int64x2_t) __builtin_aarch64_reinterpretv2div16qi ((int8x16_t) __a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_s64_p16 (poly16x8_t __a) +{ + return (int64x2_t) __builtin_aarch64_reinterpretv2div8hi ((int16x8_t) __a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vreinterpret_u64_s8 (int8x8_t __a) +{ + return (uint64x1_t) __builtin_aarch64_reinterpretdiv8qi (__a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vreinterpret_u64_s16 (int16x4_t __a) +{ + return (uint64x1_t) __builtin_aarch64_reinterpretdiv4hi (__a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vreinterpret_u64_s32 (int32x2_t __a) +{ + return (uint64x1_t) __builtin_aarch64_reinterpretdiv2si (__a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vreinterpret_u64_s64 (int64x1_t __a) +{ + return (uint64x1_t) __builtin_aarch64_reinterpretdidi (__a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vreinterpret_u64_f32 (float32x2_t __a) +{ + return (uint64x1_t) __builtin_aarch64_reinterpretdiv2sf (__a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vreinterpret_u64_u8 (uint8x8_t __a) +{ + return (uint64x1_t) __builtin_aarch64_reinterpretdiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vreinterpret_u64_u16 (uint16x4_t __a) +{ + return (uint64x1_t) __builtin_aarch64_reinterpretdiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vreinterpret_u64_u32 (uint32x2_t __a) +{ + return (uint64x1_t) __builtin_aarch64_reinterpretdiv2si ((int32x2_t) __a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vreinterpret_u64_p8 (poly8x8_t __a) +{ + return (uint64x1_t) __builtin_aarch64_reinterpretdiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vreinterpret_u64_p16 (poly16x4_t __a) +{ + return (uint64x1_t) __builtin_aarch64_reinterpretdiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_u64_s8 (int8x16_t __a) +{ + return (uint64x2_t) __builtin_aarch64_reinterpretv2div16qi (__a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_u64_s16 (int16x8_t __a) +{ + return (uint64x2_t) __builtin_aarch64_reinterpretv2div8hi (__a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_u64_s32 (int32x4_t __a) +{ + return (uint64x2_t) __builtin_aarch64_reinterpretv2div4si (__a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_u64_s64 (int64x2_t __a) +{ + return (uint64x2_t) __builtin_aarch64_reinterpretv2div2di (__a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_u64_f32 (float32x4_t __a) +{ + 
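
A note on the vreinterpret* block above and below: these intrinsics compile to no instructions at all. The __builtin_aarch64_reinterpret<dest><src> builtins only change the static vector type; the register contents are untouched. A minimal usage sketch, assuming an AArch64 target with this header on the include path:

#include <arm_neon.h>
#include <stdio.h>

int main (void)
{
  uint32x2_t bits = vdup_n_u32 (0x3f800000);      /* IEEE-754 bit pattern of 1.0f */
  float32x2_t f = vreinterpret_f32_u32 (bits);    /* same 64-bit register, new type */
  printf ("%f\n", (double) vget_lane_f32 (f, 0)); /* prints 1.000000 */
  return 0;
}
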
return (uint64x2_t) __builtin_aarch64_reinterpretv2div4sf (__a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_u64_u8 (uint8x16_t __a) +{ + return (uint64x2_t) __builtin_aarch64_reinterpretv2div16qi ((int8x16_t) + __a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_u64_u16 (uint16x8_t __a) +{ + return (uint64x2_t) __builtin_aarch64_reinterpretv2div8hi ((int16x8_t) __a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_u64_u32 (uint32x4_t __a) +{ + return (uint64x2_t) __builtin_aarch64_reinterpretv2div4si ((int32x4_t) __a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_u64_p8 (poly8x16_t __a) +{ + return (uint64x2_t) __builtin_aarch64_reinterpretv2div16qi ((int8x16_t) + __a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vreinterpretq_u64_p16 (poly16x8_t __a) +{ + return (uint64x2_t) __builtin_aarch64_reinterpretv2div8hi ((int16x8_t) __a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vreinterpret_s8_s16 (int16x4_t __a) +{ + return (int8x8_t) __builtin_aarch64_reinterpretv8qiv4hi (__a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vreinterpret_s8_s32 (int32x2_t __a) +{ + return (int8x8_t) __builtin_aarch64_reinterpretv8qiv2si (__a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vreinterpret_s8_s64 (int64x1_t __a) +{ + return (int8x8_t) __builtin_aarch64_reinterpretv8qidi (__a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vreinterpret_s8_f32 (float32x2_t __a) +{ + return (int8x8_t) __builtin_aarch64_reinterpretv8qiv2sf (__a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vreinterpret_s8_u8 (uint8x8_t __a) +{ + return (int8x8_t) __builtin_aarch64_reinterpretv8qiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vreinterpret_s8_u16 (uint16x4_t __a) +{ + return (int8x8_t) __builtin_aarch64_reinterpretv8qiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vreinterpret_s8_u32 (uint32x2_t __a) +{ + return (int8x8_t) __builtin_aarch64_reinterpretv8qiv2si ((int32x2_t) __a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vreinterpret_s8_u64 (uint64x1_t __a) +{ + return (int8x8_t) __builtin_aarch64_reinterpretv8qidi ((int64x1_t) __a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vreinterpret_s8_p8 (poly8x8_t __a) +{ + return (int8x8_t) __builtin_aarch64_reinterpretv8qiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vreinterpret_s8_p16 (poly16x4_t __a) +{ + return (int8x8_t) __builtin_aarch64_reinterpretv8qiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_s8_s16 (int16x8_t __a) +{ + return (int8x16_t) __builtin_aarch64_reinterpretv16qiv8hi (__a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_s8_s32 (int32x4_t __a) +{ + return (int8x16_t) __builtin_aarch64_reinterpretv16qiv4si (__a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_s8_s64 (int64x2_t __a) +{ + return (int8x16_t) 
__builtin_aarch64_reinterpretv16qiv2di (__a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_s8_f32 (float32x4_t __a) +{ + return (int8x16_t) __builtin_aarch64_reinterpretv16qiv4sf (__a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_s8_u8 (uint8x16_t __a) +{ + return (int8x16_t) __builtin_aarch64_reinterpretv16qiv16qi ((int8x16_t) + __a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_s8_u16 (uint16x8_t __a) +{ + return (int8x16_t) __builtin_aarch64_reinterpretv16qiv8hi ((int16x8_t) __a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_s8_u32 (uint32x4_t __a) +{ + return (int8x16_t) __builtin_aarch64_reinterpretv16qiv4si ((int32x4_t) __a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_s8_u64 (uint64x2_t __a) +{ + return (int8x16_t) __builtin_aarch64_reinterpretv16qiv2di ((int64x2_t) __a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_s8_p8 (poly8x16_t __a) +{ + return (int8x16_t) __builtin_aarch64_reinterpretv16qiv16qi ((int8x16_t) + __a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_s8_p16 (poly16x8_t __a) +{ + return (int8x16_t) __builtin_aarch64_reinterpretv16qiv8hi ((int16x8_t) __a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vreinterpret_s16_s8 (int8x8_t __a) +{ + return (int16x4_t) __builtin_aarch64_reinterpretv4hiv8qi (__a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vreinterpret_s16_s32 (int32x2_t __a) +{ + return (int16x4_t) __builtin_aarch64_reinterpretv4hiv2si (__a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vreinterpret_s16_s64 (int64x1_t __a) +{ + return (int16x4_t) __builtin_aarch64_reinterpretv4hidi (__a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vreinterpret_s16_f32 (float32x2_t __a) +{ + return (int16x4_t) __builtin_aarch64_reinterpretv4hiv2sf (__a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vreinterpret_s16_u8 (uint8x8_t __a) +{ + return (int16x4_t) __builtin_aarch64_reinterpretv4hiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vreinterpret_s16_u16 (uint16x4_t __a) +{ + return (int16x4_t) __builtin_aarch64_reinterpretv4hiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vreinterpret_s16_u32 (uint32x2_t __a) +{ + return (int16x4_t) __builtin_aarch64_reinterpretv4hiv2si ((int32x2_t) __a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vreinterpret_s16_u64 (uint64x1_t __a) +{ + return (int16x4_t) __builtin_aarch64_reinterpretv4hidi ((int64x1_t) __a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vreinterpret_s16_p8 (poly8x8_t __a) +{ + return (int16x4_t) __builtin_aarch64_reinterpretv4hiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vreinterpret_s16_p16 (poly16x4_t __a) +{ + return (int16x4_t) __builtin_aarch64_reinterpretv4hiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_s16_s8 (int8x16_t __a) +{ + return 
(int16x8_t) __builtin_aarch64_reinterpretv8hiv16qi (__a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_s16_s32 (int32x4_t __a) +{ + return (int16x8_t) __builtin_aarch64_reinterpretv8hiv4si (__a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_s16_s64 (int64x2_t __a) +{ + return (int16x8_t) __builtin_aarch64_reinterpretv8hiv2di (__a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_s16_f32 (float32x4_t __a) +{ + return (int16x8_t) __builtin_aarch64_reinterpretv8hiv4sf (__a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_s16_u8 (uint8x16_t __a) +{ + return (int16x8_t) __builtin_aarch64_reinterpretv8hiv16qi ((int8x16_t) __a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_s16_u16 (uint16x8_t __a) +{ + return (int16x8_t) __builtin_aarch64_reinterpretv8hiv8hi ((int16x8_t) __a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_s16_u32 (uint32x4_t __a) +{ + return (int16x8_t) __builtin_aarch64_reinterpretv8hiv4si ((int32x4_t) __a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_s16_u64 (uint64x2_t __a) +{ + return (int16x8_t) __builtin_aarch64_reinterpretv8hiv2di ((int64x2_t) __a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_s16_p8 (poly8x16_t __a) +{ + return (int16x8_t) __builtin_aarch64_reinterpretv8hiv16qi ((int8x16_t) __a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_s16_p16 (poly16x8_t __a) +{ + return (int16x8_t) __builtin_aarch64_reinterpretv8hiv8hi ((int16x8_t) __a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vreinterpret_s32_s8 (int8x8_t __a) +{ + return (int32x2_t) __builtin_aarch64_reinterpretv2siv8qi (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vreinterpret_s32_s16 (int16x4_t __a) +{ + return (int32x2_t) __builtin_aarch64_reinterpretv2siv4hi (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vreinterpret_s32_s64 (int64x1_t __a) +{ + return (int32x2_t) __builtin_aarch64_reinterpretv2sidi (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vreinterpret_s32_f32 (float32x2_t __a) +{ + return (int32x2_t) __builtin_aarch64_reinterpretv2siv2sf (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vreinterpret_s32_u8 (uint8x8_t __a) +{ + return (int32x2_t) __builtin_aarch64_reinterpretv2siv8qi ((int8x8_t) __a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vreinterpret_s32_u16 (uint16x4_t __a) +{ + return (int32x2_t) __builtin_aarch64_reinterpretv2siv4hi ((int16x4_t) __a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vreinterpret_s32_u32 (uint32x2_t __a) +{ + return (int32x2_t) __builtin_aarch64_reinterpretv2siv2si ((int32x2_t) __a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vreinterpret_s32_u64 (uint64x1_t __a) +{ + return (int32x2_t) __builtin_aarch64_reinterpretv2sidi ((int64x1_t) __a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vreinterpret_s32_p8 (poly8x8_t __a) +{ + return (int32x2_t) 
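
Note also that every unsigned and poly variant in this block first casts its argument to the corresponding signed vector type, apparently because the reinterpret builtins are only declared for signed element modes. Semantically each intrinsic is just a bit copy between equally sized vector types, as in this reference sketch (a hypothetical helper, shown for illustration only):

#include <arm_neon.h>
#include <string.h>

static int16x4_t reinterpret_s16_u8_ref (uint8x8_t __a)
{
  int16x4_t __r;
  memcpy (&__r, &__a, sizeof __r);  /* pure 8-byte bit copy, like vreinterpret_s16_u8 */
  return __r;
}
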
__builtin_aarch64_reinterpretv2siv8qi ((int8x8_t) __a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vreinterpret_s32_p16 (poly16x4_t __a) +{ + return (int32x2_t) __builtin_aarch64_reinterpretv2siv4hi ((int16x4_t) __a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_s32_s8 (int8x16_t __a) +{ + return (int32x4_t) __builtin_aarch64_reinterpretv4siv16qi (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_s32_s16 (int16x8_t __a) +{ + return (int32x4_t) __builtin_aarch64_reinterpretv4siv8hi (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_s32_s64 (int64x2_t __a) +{ + return (int32x4_t) __builtin_aarch64_reinterpretv4siv2di (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_s32_f32 (float32x4_t __a) +{ + return (int32x4_t) __builtin_aarch64_reinterpretv4siv4sf (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_s32_u8 (uint8x16_t __a) +{ + return (int32x4_t) __builtin_aarch64_reinterpretv4siv16qi ((int8x16_t) __a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_s32_u16 (uint16x8_t __a) +{ + return (int32x4_t) __builtin_aarch64_reinterpretv4siv8hi ((int16x8_t) __a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_s32_u32 (uint32x4_t __a) +{ + return (int32x4_t) __builtin_aarch64_reinterpretv4siv4si ((int32x4_t) __a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_s32_u64 (uint64x2_t __a) +{ + return (int32x4_t) __builtin_aarch64_reinterpretv4siv2di ((int64x2_t) __a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_s32_p8 (poly8x16_t __a) +{ + return (int32x4_t) __builtin_aarch64_reinterpretv4siv16qi ((int8x16_t) __a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_s32_p16 (poly16x8_t __a) +{ + return (int32x4_t) __builtin_aarch64_reinterpretv4siv8hi ((int16x8_t) __a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vreinterpret_u8_s8 (int8x8_t __a) +{ + return (uint8x8_t) __builtin_aarch64_reinterpretv8qiv8qi (__a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vreinterpret_u8_s16 (int16x4_t __a) +{ + return (uint8x8_t) __builtin_aarch64_reinterpretv8qiv4hi (__a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vreinterpret_u8_s32 (int32x2_t __a) +{ + return (uint8x8_t) __builtin_aarch64_reinterpretv8qiv2si (__a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vreinterpret_u8_s64 (int64x1_t __a) +{ + return (uint8x8_t) __builtin_aarch64_reinterpretv8qidi (__a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vreinterpret_u8_f32 (float32x2_t __a) +{ + return (uint8x8_t) __builtin_aarch64_reinterpretv8qiv2sf (__a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vreinterpret_u8_u16 (uint16x4_t __a) +{ + return (uint8x8_t) __builtin_aarch64_reinterpretv8qiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vreinterpret_u8_u32 (uint32x2_t __a) +{ + return (uint8x8_t) 
__builtin_aarch64_reinterpretv8qiv2si ((int32x2_t) __a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vreinterpret_u8_u64 (uint64x1_t __a) +{ + return (uint8x8_t) __builtin_aarch64_reinterpretv8qidi ((int64x1_t) __a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vreinterpret_u8_p8 (poly8x8_t __a) +{ + return (uint8x8_t) __builtin_aarch64_reinterpretv8qiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vreinterpret_u8_p16 (poly16x4_t __a) +{ + return (uint8x8_t) __builtin_aarch64_reinterpretv8qiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_u8_s8 (int8x16_t __a) +{ + return (uint8x16_t) __builtin_aarch64_reinterpretv16qiv16qi (__a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_u8_s16 (int16x8_t __a) +{ + return (uint8x16_t) __builtin_aarch64_reinterpretv16qiv8hi (__a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_u8_s32 (int32x4_t __a) +{ + return (uint8x16_t) __builtin_aarch64_reinterpretv16qiv4si (__a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_u8_s64 (int64x2_t __a) +{ + return (uint8x16_t) __builtin_aarch64_reinterpretv16qiv2di (__a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_u8_f32 (float32x4_t __a) +{ + return (uint8x16_t) __builtin_aarch64_reinterpretv16qiv4sf (__a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_u8_u16 (uint16x8_t __a) +{ + return (uint8x16_t) __builtin_aarch64_reinterpretv16qiv8hi ((int16x8_t) + __a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_u8_u32 (uint32x4_t __a) +{ + return (uint8x16_t) __builtin_aarch64_reinterpretv16qiv4si ((int32x4_t) + __a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_u8_u64 (uint64x2_t __a) +{ + return (uint8x16_t) __builtin_aarch64_reinterpretv16qiv2di ((int64x2_t) + __a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_u8_p8 (poly8x16_t __a) +{ + return (uint8x16_t) __builtin_aarch64_reinterpretv16qiv16qi ((int8x16_t) + __a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vreinterpretq_u8_p16 (poly16x8_t __a) +{ + return (uint8x16_t) __builtin_aarch64_reinterpretv16qiv8hi ((int16x8_t) + __a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vreinterpret_u16_s8 (int8x8_t __a) +{ + return (uint16x4_t) __builtin_aarch64_reinterpretv4hiv8qi (__a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vreinterpret_u16_s16 (int16x4_t __a) +{ + return (uint16x4_t) __builtin_aarch64_reinterpretv4hiv4hi (__a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vreinterpret_u16_s32 (int32x2_t __a) +{ + return (uint16x4_t) __builtin_aarch64_reinterpretv4hiv2si (__a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vreinterpret_u16_s64 (int64x1_t __a) +{ + return (uint16x4_t) __builtin_aarch64_reinterpretv4hidi (__a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vreinterpret_u16_f32 (float32x2_t __a) +{ + return 
(uint16x4_t) __builtin_aarch64_reinterpretv4hiv2sf (__a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vreinterpret_u16_u8 (uint8x8_t __a) +{ + return (uint16x4_t) __builtin_aarch64_reinterpretv4hiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vreinterpret_u16_u32 (uint32x2_t __a) +{ + return (uint16x4_t) __builtin_aarch64_reinterpretv4hiv2si ((int32x2_t) __a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vreinterpret_u16_u64 (uint64x1_t __a) +{ + return (uint16x4_t) __builtin_aarch64_reinterpretv4hidi ((int64x1_t) __a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vreinterpret_u16_p8 (poly8x8_t __a) +{ + return (uint16x4_t) __builtin_aarch64_reinterpretv4hiv8qi ((int8x8_t) __a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vreinterpret_u16_p16 (poly16x4_t __a) +{ + return (uint16x4_t) __builtin_aarch64_reinterpretv4hiv4hi ((int16x4_t) __a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_u16_s8 (int8x16_t __a) +{ + return (uint16x8_t) __builtin_aarch64_reinterpretv8hiv16qi (__a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_u16_s16 (int16x8_t __a) +{ + return (uint16x8_t) __builtin_aarch64_reinterpretv8hiv8hi (__a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_u16_s32 (int32x4_t __a) +{ + return (uint16x8_t) __builtin_aarch64_reinterpretv8hiv4si (__a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_u16_s64 (int64x2_t __a) +{ + return (uint16x8_t) __builtin_aarch64_reinterpretv8hiv2di (__a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_u16_f32 (float32x4_t __a) +{ + return (uint16x8_t) __builtin_aarch64_reinterpretv8hiv4sf (__a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_u16_u8 (uint8x16_t __a) +{ + return (uint16x8_t) __builtin_aarch64_reinterpretv8hiv16qi ((int8x16_t) + __a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_u16_u32 (uint32x4_t __a) +{ + return (uint16x8_t) __builtin_aarch64_reinterpretv8hiv4si ((int32x4_t) __a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_u16_u64 (uint64x2_t __a) +{ + return (uint16x8_t) __builtin_aarch64_reinterpretv8hiv2di ((int64x2_t) __a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_u16_p8 (poly8x16_t __a) +{ + return (uint16x8_t) __builtin_aarch64_reinterpretv8hiv16qi ((int8x16_t) + __a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vreinterpretq_u16_p16 (poly16x8_t __a) +{ + return (uint16x8_t) __builtin_aarch64_reinterpretv8hiv8hi ((int16x8_t) __a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vreinterpret_u32_s8 (int8x8_t __a) +{ + return (uint32x2_t) __builtin_aarch64_reinterpretv2siv8qi (__a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vreinterpret_u32_s16 (int16x4_t __a) +{ + return (uint32x2_t) __builtin_aarch64_reinterpretv2siv4hi (__a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vreinterpret_u32_s32 
(int32x2_t __a) +{ + return (uint32x2_t) __builtin_aarch64_reinterpretv2siv2si (__a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vreinterpret_u32_s64 (int64x1_t __a) +{ + return (uint32x2_t) __builtin_aarch64_reinterpretv2sidi (__a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vreinterpret_u32_f32 (float32x2_t __a) +{ + return (uint32x2_t) __builtin_aarch64_reinterpretv2siv2sf (__a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vreinterpret_u32_u8 (uint8x8_t __a) +{ + return (uint32x2_t) __builtin_aarch64_reinterpretv2siv8qi ((int8x8_t) __a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vreinterpret_u32_u16 (uint16x4_t __a) +{ + return (uint32x2_t) __builtin_aarch64_reinterpretv2siv4hi ((int16x4_t) __a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vreinterpret_u32_u64 (uint64x1_t __a) +{ + return (uint32x2_t) __builtin_aarch64_reinterpretv2sidi ((int64x1_t) __a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vreinterpret_u32_p8 (poly8x8_t __a) +{ + return (uint32x2_t) __builtin_aarch64_reinterpretv2siv8qi ((int8x8_t) __a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vreinterpret_u32_p16 (poly16x4_t __a) +{ + return (uint32x2_t) __builtin_aarch64_reinterpretv2siv4hi ((int16x4_t) __a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_u32_s8 (int8x16_t __a) +{ + return (uint32x4_t) __builtin_aarch64_reinterpretv4siv16qi (__a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_u32_s16 (int16x8_t __a) +{ + return (uint32x4_t) __builtin_aarch64_reinterpretv4siv8hi (__a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_u32_s32 (int32x4_t __a) +{ + return (uint32x4_t) __builtin_aarch64_reinterpretv4siv4si (__a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_u32_s64 (int64x2_t __a) +{ + return (uint32x4_t) __builtin_aarch64_reinterpretv4siv2di (__a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_u32_f32 (float32x4_t __a) +{ + return (uint32x4_t) __builtin_aarch64_reinterpretv4siv4sf (__a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_u32_u8 (uint8x16_t __a) +{ + return (uint32x4_t) __builtin_aarch64_reinterpretv4siv16qi ((int8x16_t) + __a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_u32_u16 (uint16x8_t __a) +{ + return (uint32x4_t) __builtin_aarch64_reinterpretv4siv8hi ((int16x8_t) __a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_u32_u64 (uint64x2_t __a) +{ + return (uint32x4_t) __builtin_aarch64_reinterpretv4siv2di ((int64x2_t) __a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_u32_p8 (poly8x16_t __a) +{ + return (uint32x4_t) __builtin_aarch64_reinterpretv4siv16qi ((int8x16_t) + __a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vreinterpretq_u32_p16 (poly16x8_t __a) +{ + return (uint32x4_t) __builtin_aarch64_reinterpretv4siv8hi ((int16x8_t) __a); +} + +#define __GET_LOW(__TYPE) \ + uint64x2_t tmp = vreinterpretq_u64_##__TYPE 
(__a); \ + uint64_t lo = vgetq_lane_u64 (tmp, 0); \ + return vreinterpret_##__TYPE##_u64 (lo); + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vget_low_f32 (float32x4_t __a) +{ + __GET_LOW (f32); +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vget_low_f64 (float64x2_t __a) +{ + return vgetq_lane_f64 (__a, 0); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vget_low_p8 (poly8x16_t __a) +{ + __GET_LOW (p8); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vget_low_p16 (poly16x8_t __a) +{ + __GET_LOW (p16); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vget_low_s8 (int8x16_t __a) +{ + __GET_LOW (s8); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vget_low_s16 (int16x8_t __a) +{ + __GET_LOW (s16); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vget_low_s32 (int32x4_t __a) +{ + __GET_LOW (s32); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vget_low_s64 (int64x2_t __a) +{ + return vgetq_lane_s64 (__a, 0); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vget_low_u8 (uint8x16_t __a) +{ + __GET_LOW (u8); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vget_low_u16 (uint16x8_t __a) +{ + __GET_LOW (u16); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vget_low_u32 (uint32x4_t __a) +{ + __GET_LOW (u32); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vget_low_u64 (uint64x2_t __a) +{ + return vgetq_lane_u64 (__a, 0); +} + +#undef __GET_LOW + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vcombine_s8 (int8x8_t __a, int8x8_t __b) +{ + return (int8x16_t) __builtin_aarch64_combinev8qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vcombine_s16 (int16x4_t __a, int16x4_t __b) +{ + return (int16x8_t) __builtin_aarch64_combinev4hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vcombine_s32 (int32x2_t __a, int32x2_t __b) +{ + return (int32x4_t) __builtin_aarch64_combinev2si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vcombine_s64 (int64x1_t __a, int64x1_t __b) +{ + return (int64x2_t) __builtin_aarch64_combinedi (__a, __b); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vcombine_f32 (float32x2_t __a, float32x2_t __b) +{ + return (float32x4_t) __builtin_aarch64_combinev2sf (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcombine_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x16_t) __builtin_aarch64_combinev8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcombine_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x8_t) __builtin_aarch64_combinev4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcombine_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x4_t) __builtin_aarch64_combinev2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcombine_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return 
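
The __GET_LOW helper above funnels every element type through one code path: reinterpret to uint64x2_t, read lane 0 with vgetq_lane_u64, and reinterpret the 64-bit half back. vcombine_* is the inverse, joining two 64-bit halves into one 128-bit vector, so splitting and recombining is the identity. A sketch (vget_high_s32 is defined elsewhere in this header):

#include <arm_neon.h>

int32x4_t split_and_rejoin (int32x4_t v)
{
  int32x2_t lo = vget_low_s32 (v);   /* lanes 0-1 */
  int32x2_t hi = vget_high_s32 (v);  /* lanes 2-3 */
  return vcombine_s32 (lo, hi);      /* identical to v */
}
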
(uint64x2_t) __builtin_aarch64_combinedi ((int64x1_t) __a, + (int64x1_t) __b); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vcombine_f64 (float64x1_t __a, float64x1_t __b) +{ + return (float64x2_t) __builtin_aarch64_combinedf (__a, __b); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vcombine_p8 (poly8x8_t __a, poly8x8_t __b) +{ + return (poly8x16_t) __builtin_aarch64_combinev8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vcombine_p16 (poly16x4_t __a, poly16x4_t __b) +{ + return (poly16x8_t) __builtin_aarch64_combinev4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +/* Start of temporary inline asm implementations. */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vaba_s8 (int8x8_t a, int8x8_t b, int8x8_t c) +{ + int8x8_t result; + __asm__ ("saba %0.8b,%2.8b,%3.8b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vaba_s16 (int16x4_t a, int16x4_t b, int16x4_t c) +{ + int16x4_t result; + __asm__ ("saba %0.4h,%2.4h,%3.4h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vaba_s32 (int32x2_t a, int32x2_t b, int32x2_t c) +{ + int32x2_t result; + __asm__ ("saba %0.2s,%2.2s,%3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vaba_u8 (uint8x8_t a, uint8x8_t b, uint8x8_t c) +{ + uint8x8_t result; + __asm__ ("uaba %0.8b,%2.8b,%3.8b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vaba_u16 (uint16x4_t a, uint16x4_t b, uint16x4_t c) +{ + uint16x4_t result; + __asm__ ("uaba %0.4h,%2.4h,%3.4h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vaba_u32 (uint32x2_t a, uint32x2_t b, uint32x2_t c) +{ + uint32x2_t result; + __asm__ ("uaba %0.2s,%2.2s,%3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vabal_high_s8 (int16x8_t a, int8x16_t b, int8x16_t c) +{ + int16x8_t result; + __asm__ ("sabal2 %0.8h,%2.16b,%3.16b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vabal_high_s16 (int32x4_t a, int16x8_t b, int16x8_t c) +{ + int32x4_t result; + __asm__ ("sabal2 %0.4s,%2.8h,%3.8h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vabal_high_s32 (int64x2_t a, int32x4_t b, int32x4_t c) +{ + int64x2_t result; + __asm__ ("sabal2 %0.2d,%2.4s,%3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vabal_high_u8 (uint16x8_t a, uint8x16_t b, uint8x16_t c) +{ + uint16x8_t result; + __asm__ ("uabal2 %0.8h,%2.16b,%3.16b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers 
*/); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vabal_high_u16 (uint32x4_t a, uint16x8_t b, uint16x8_t c) +{ + uint32x4_t result; + __asm__ ("uabal2 %0.4s,%2.8h,%3.8h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vabal_high_u32 (uint64x2_t a, uint32x4_t b, uint32x4_t c) +{ + uint64x2_t result; + __asm__ ("uabal2 %0.2d,%2.4s,%3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vabal_s8 (int16x8_t a, int8x8_t b, int8x8_t c) +{ + int16x8_t result; + __asm__ ("sabal %0.8h,%2.8b,%3.8b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vabal_s16 (int32x4_t a, int16x4_t b, int16x4_t c) +{ + int32x4_t result; + __asm__ ("sabal %0.4s,%2.4h,%3.4h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vabal_s32 (int64x2_t a, int32x2_t b, int32x2_t c) +{ + int64x2_t result; + __asm__ ("sabal %0.2d,%2.2s,%3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vabal_u8 (uint16x8_t a, uint8x8_t b, uint8x8_t c) +{ + uint16x8_t result; + __asm__ ("uabal %0.8h,%2.8b,%3.8b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vabal_u16 (uint32x4_t a, uint16x4_t b, uint16x4_t c) +{ + uint32x4_t result; + __asm__ ("uabal %0.4s,%2.4h,%3.4h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vabal_u32 (uint64x2_t a, uint32x2_t b, uint32x2_t c) +{ + uint64x2_t result; + __asm__ ("uabal %0.2d,%2.2s,%3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vabaq_s8 (int8x16_t a, int8x16_t b, int8x16_t c) +{ + int8x16_t result; + __asm__ ("saba %0.16b,%2.16b,%3.16b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vabaq_s16 (int16x8_t a, int16x8_t b, int16x8_t c) +{ + int16x8_t result; + __asm__ ("saba %0.8h,%2.8h,%3.8h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vabaq_s32 (int32x4_t a, int32x4_t b, int32x4_t c) +{ + int32x4_t result; + __asm__ ("saba %0.4s,%2.4s,%3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vabaq_u8 (uint8x16_t a, uint8x16_t b, uint8x16_t c) +{ + uint8x16_t result; + __asm__ ("uaba %0.16b,%2.16b,%3.16b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vabaq_u16 (uint16x8_t a, uint16x8_t b, uint16x8_t c) +{ + uint16x8_t result; 
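
In the vaba*/vabal* definitions above and below, the "0"(a) constraint ties the accumulator argument to the output register, which is what lets a single SABA/UABA/SABAL/UABAL instruction accumulate in place. Per lane the operation is a[i] + |b[i] - c[i]|, wrapping in the lane width; a scalar reference for one uint8 lane (a hypothetical helper, not part of the header):

#include <stdint.h>

static uint8_t uaba_lane_ref (uint8_t a, uint8_t b, uint8_t c)
{
  uint8_t d = (uint8_t) (b > c ? b - c : c - b); /* |b - c| without wraparound */
  return (uint8_t) (a + d);                      /* accumulate mod 256, as UABA does */
}
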
+ __asm__ ("uaba %0.8h,%2.8h,%3.8h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vabaq_u32 (uint32x4_t a, uint32x4_t b, uint32x4_t c) +{ + uint32x4_t result; + __asm__ ("uaba %0.4s,%2.4s,%3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vabd_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("fabd %0.2s, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vabd_s8 (int8x8_t a, int8x8_t b) +{ + int8x8_t result; + __asm__ ("sabd %0.8b, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vabd_s16 (int16x4_t a, int16x4_t b) +{ + int16x4_t result; + __asm__ ("sabd %0.4h, %1.4h, %2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vabd_s32 (int32x2_t a, int32x2_t b) +{ + int32x2_t result; + __asm__ ("sabd %0.2s, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vabd_u8 (uint8x8_t a, uint8x8_t b) +{ + uint8x8_t result; + __asm__ ("uabd %0.8b, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vabd_u16 (uint16x4_t a, uint16x4_t b) +{ + uint16x4_t result; + __asm__ ("uabd %0.4h, %1.4h, %2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vabd_u32 (uint32x2_t a, uint32x2_t b) +{ + uint32x2_t result; + __asm__ ("uabd %0.2s, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vabdd_f64 (float64_t a, float64_t b) +{ + float64_t result; + __asm__ ("fabd %d0, %d1, %d2" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vabdl_high_s8 (int8x16_t a, int8x16_t b) +{ + int16x8_t result; + __asm__ ("sabdl2 %0.8h,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vabdl_high_s16 (int16x8_t a, int16x8_t b) +{ + int32x4_t result; + __asm__ ("sabdl2 %0.4s,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vabdl_high_s32 (int32x4_t a, int32x4_t b) +{ + int64x2_t result; + __asm__ ("sabdl2 %0.2d,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vabdl_high_u8 (uint8x16_t a, uint8x16_t b) +{ + uint16x8_t result; + __asm__ ("uabdl2 %0.8h,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static 
__inline uint32x4_t __attribute__ ((__always_inline__)) +vabdl_high_u16 (uint16x8_t a, uint16x8_t b) +{ + uint32x4_t result; + __asm__ ("uabdl2 %0.4s,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vabdl_high_u32 (uint32x4_t a, uint32x4_t b) +{ + uint64x2_t result; + __asm__ ("uabdl2 %0.2d,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vabdl_s8 (int8x8_t a, int8x8_t b) +{ + int16x8_t result; + __asm__ ("sabdl %0.8h, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vabdl_s16 (int16x4_t a, int16x4_t b) +{ + int32x4_t result; + __asm__ ("sabdl %0.4s, %1.4h, %2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vabdl_s32 (int32x2_t a, int32x2_t b) +{ + int64x2_t result; + __asm__ ("sabdl %0.2d, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vabdl_u8 (uint8x8_t a, uint8x8_t b) +{ + uint16x8_t result; + __asm__ ("uabdl %0.8h, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vabdl_u16 (uint16x4_t a, uint16x4_t b) +{ + uint32x4_t result; + __asm__ ("uabdl %0.4s, %1.4h, %2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vabdl_u32 (uint32x2_t a, uint32x2_t b) +{ + uint64x2_t result; + __asm__ ("uabdl %0.2d, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vabdq_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("fabd %0.4s, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vabdq_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("fabd %0.2d, %1.2d, %2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vabdq_s8 (int8x16_t a, int8x16_t b) +{ + int8x16_t result; + __asm__ ("sabd %0.16b, %1.16b, %2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vabdq_s16 (int16x8_t a, int16x8_t b) +{ + int16x8_t result; + __asm__ ("sabd %0.8h, %1.8h, %2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vabdq_s32 (int32x4_t a, int32x4_t b) +{ + int32x4_t result; + __asm__ ("sabd %0.4s, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vabdq_u8 (uint8x16_t a, uint8x16_t b) +{ + uint8x16_t result; + __asm__ ("uabd %0.16b, %1.16b, 
%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vabdq_u16 (uint16x8_t a, uint16x8_t b) +{ + uint16x8_t result; + __asm__ ("uabd %0.8h, %1.8h, %2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vabdq_u32 (uint32x4_t a, uint32x4_t b) +{ + uint32x4_t result; + __asm__ ("uabd %0.4s, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vabds_f32 (float32_t a, float32_t b) +{ + float32_t result; + __asm__ ("fabd %s0, %s1, %s2" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vaddlv_s8 (int8x8_t a) +{ + int16_t result; + __asm__ ("saddlv %h0,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vaddlv_s16 (int16x4_t a) +{ + int32_t result; + __asm__ ("saddlv %s0,%1.4h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vaddlv_u8 (uint8x8_t a) +{ + uint16_t result; + __asm__ ("uaddlv %h0,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vaddlv_u16 (uint16x4_t a) +{ + uint32_t result; + __asm__ ("uaddlv %s0,%1.4h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vaddlvq_s8 (int8x16_t a) +{ + int16_t result; + __asm__ ("saddlv %h0,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vaddlvq_s16 (int16x8_t a) +{ + int32_t result; + __asm__ ("saddlv %s0,%1.8h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64_t __attribute__ ((__always_inline__)) +vaddlvq_s32 (int32x4_t a) +{ + int64_t result; + __asm__ ("saddlv %d0,%1.4s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vaddlvq_u8 (uint8x16_t a) +{ + uint16_t result; + __asm__ ("uaddlv %h0,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vaddlvq_u16 (uint16x8_t a) +{ + uint32_t result; + __asm__ ("uaddlv %s0,%1.8h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vaddlvq_u32 (uint32x4_t a) +{ + uint64_t result; + __asm__ ("uaddlv %d0,%1.4s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vcls_s8 (int8x8_t a) +{ + int8x8_t result; + __asm__ ("cls %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vcls_s16 (int16x4_t a) +{ + int16x4_t result; + __asm__ ("cls %0.4h,%1.4h" + : "=w"(result) + : "w"(a) + : /* 
No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vcls_s32 (int32x2_t a) +{ + int32x2_t result; + __asm__ ("cls %0.2s,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vclsq_s8 (int8x16_t a) +{ + int8x16_t result; + __asm__ ("cls %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vclsq_s16 (int16x8_t a) +{ + int16x8_t result; + __asm__ ("cls %0.8h,%1.8h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vclsq_s32 (int32x4_t a) +{ + int32x4_t result; + __asm__ ("cls %0.4s,%1.4s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vcnt_p8 (poly8x8_t a) +{ + poly8x8_t result; + __asm__ ("cnt %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vcnt_s8 (int8x8_t a) +{ + int8x8_t result; + __asm__ ("cnt %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcnt_u8 (uint8x8_t a) +{ + uint8x8_t result; + __asm__ ("cnt %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vcntq_p8 (poly8x16_t a) +{ + poly8x16_t result; + __asm__ ("cnt %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vcntq_s8 (int8x16_t a) +{ + int8x16_t result; + __asm__ ("cnt %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcntq_u8 (uint8x16_t a) +{ + uint8x16_t result; + __asm__ ("cnt %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +#define vcopyq_lane_f32(a, b, c, d) \ + __extension__ \ + ({ \ + float32x4_t c_ = (c); \ + float32x4_t a_ = (a); \ + float32x4_t result; \ + __asm__ ("ins %0.s[%2], %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "i"(b), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcopyq_lane_f64(a, b, c, d) \ + __extension__ \ + ({ \ + float64x2_t c_ = (c); \ + float64x2_t a_ = (a); \ + float64x2_t result; \ + __asm__ ("ins %0.d[%2], %3.d[%4]" \ + : "=w"(result) \ + : "0"(a_), "i"(b), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcopyq_lane_p8(a, b, c, d) \ + __extension__ \ + ({ \ + poly8x16_t c_ = (c); \ + poly8x16_t a_ = (a); \ + poly8x16_t result; \ + __asm__ ("ins %0.b[%2], %3.b[%4]" \ + : "=w"(result) \ + : "0"(a_), "i"(b), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcopyq_lane_p16(a, b, c, d) \ + __extension__ \ + ({ \ + poly16x8_t c_ = (c); \ + poly16x8_t a_ = (a); \ + poly16x8_t result; \ + __asm__ ("ins %0.h[%2], %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "i"(b), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcopyq_lane_s8(a, b, c, d) \ + __extension__ \ + ({ \ + int8x16_t c_ = (c); \ + int8x16_t a_ = (a); \ + 
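
The vcnt_* intrinsics a few definitions above are a per-byte population count (CNT); a whole-register popcount falls out by summing the byte counts with vaddlv_u8. A sketch (vcreate_u8 is defined earlier in this header):

#include <arm_neon.h>
#include <stdint.h>

uint32_t popcount64 (uint64_t x)
{
  uint8x8_t bytes = vcreate_u8 (x);   /* view the 64 bits as 8 bytes  */
  return vaddlv_u8 (vcnt_u8 (bytes)); /* sum of per-byte bit counts   */
}
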
int8x16_t result; \ + __asm__ ("ins %0.b[%2], %3.b[%4]" \ + : "=w"(result) \ + : "0"(a_), "i"(b), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcopyq_lane_s16(a, b, c, d) \ + __extension__ \ + ({ \ + int16x8_t c_ = (c); \ + int16x8_t a_ = (a); \ + int16x8_t result; \ + __asm__ ("ins %0.h[%2], %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "i"(b), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcopyq_lane_s32(a, b, c, d) \ + __extension__ \ + ({ \ + int32x4_t c_ = (c); \ + int32x4_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("ins %0.s[%2], %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "i"(b), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcopyq_lane_s64(a, b, c, d) \ + __extension__ \ + ({ \ + int64x2_t c_ = (c); \ + int64x2_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("ins %0.d[%2], %3.d[%4]" \ + : "=w"(result) \ + : "0"(a_), "i"(b), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcopyq_lane_u8(a, b, c, d) \ + __extension__ \ + ({ \ + uint8x16_t c_ = (c); \ + uint8x16_t a_ = (a); \ + uint8x16_t result; \ + __asm__ ("ins %0.b[%2], %3.b[%4]" \ + : "=w"(result) \ + : "0"(a_), "i"(b), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcopyq_lane_u16(a, b, c, d) \ + __extension__ \ + ({ \ + uint16x8_t c_ = (c); \ + uint16x8_t a_ = (a); \ + uint16x8_t result; \ + __asm__ ("ins %0.h[%2], %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "i"(b), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcopyq_lane_u32(a, b, c, d) \ + __extension__ \ + ({ \ + uint32x4_t c_ = (c); \ + uint32x4_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("ins %0.s[%2], %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "i"(b), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcopyq_lane_u64(a, b, c, d) \ + __extension__ \ + ({ \ + uint64x2_t c_ = (c); \ + uint64x2_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("ins %0.d[%2], %3.d[%4]" \ + : "=w"(result) \ + : "0"(a_), "i"(b), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +/* vcvt_f16_f32 not supported */ + +/* vcvt_f32_f16 not supported */ + +/* vcvt_high_f16_f32 not supported */ + +/* vcvt_high_f32_f16 not supported */ + +static float32x2_t vdup_n_f32 (float32_t); + +#define vcvt_n_f32_s32(a, b) \ + __extension__ \ + ({ \ + int32x2_t a_ = (a); \ + float32x2_t result; \ + __asm__ ("scvtf %0.2s, %1.2s, #%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvt_n_f32_u32(a, b) \ + __extension__ \ + ({ \ + uint32x2_t a_ = (a); \ + float32x2_t result; \ + __asm__ ("ucvtf %0.2s, %1.2s, #%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvt_n_s32_f32(a, b) \ + __extension__ \ + ({ \ + float32x2_t a_ = (a); \ + int32x2_t result; \ + __asm__ ("fcvtzs %0.2s, %1.2s, #%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvt_n_u32_f32(a, b) \ + __extension__ \ + ({ \ + float32x2_t a_ = (a); \ + uint32x2_t result; \ + __asm__ ("fcvtzu %0.2s, %1.2s, #%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvtd_n_f64_s64(a, b) \ + __extension__ \ + ({ \ + int64_t a_ = (a); \ + float64_t result; \ + __asm__ ("scvtf %d0,%d1,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvtd_n_f64_u64(a, b) \ + __extension__ \ + ({ \ + uint64_t a_ = (a); \ + float64_t result; \ + __asm__ 
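
The vcopyq_lane_* forms are macros rather than inline functions because both lane numbers feed "i" (immediate) constraints of the INS instruction and therefore must be integer constant expressions; note that in this temporary implementation the source operand is a full q-register vector. Usage sketch:

#include <arm_neon.h>

float32x4_t put_lane (float32x4_t dst, float32x4_t src)
{
  return vcopyq_lane_f32 (dst, 0, src, 3); /* dst.s[0] = src.s[3]; lanes must be constants */
}
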
("ucvtf %d0,%d1,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvtd_n_s64_f64(a, b) \ + __extension__ \ + ({ \ + float64_t a_ = (a); \ + int64_t result; \ + __asm__ ("fcvtzs %d0,%d1,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvtd_n_u64_f64(a, b) \ + __extension__ \ + ({ \ + float64_t a_ = (a); \ + uint64_t result; \ + __asm__ ("fcvtzu %d0,%d1,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvtq_n_f32_s32(a, b) \ + __extension__ \ + ({ \ + int32x4_t a_ = (a); \ + float32x4_t result; \ + __asm__ ("scvtf %0.4s, %1.4s, #%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvtq_n_f32_u32(a, b) \ + __extension__ \ + ({ \ + uint32x4_t a_ = (a); \ + float32x4_t result; \ + __asm__ ("ucvtf %0.4s, %1.4s, #%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvtq_n_f64_s64(a, b) \ + __extension__ \ + ({ \ + int64x2_t a_ = (a); \ + float64x2_t result; \ + __asm__ ("scvtf %0.2d, %1.2d, #%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvtq_n_f64_u64(a, b) \ + __extension__ \ + ({ \ + uint64x2_t a_ = (a); \ + float64x2_t result; \ + __asm__ ("ucvtf %0.2d, %1.2d, #%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvtq_n_s32_f32(a, b) \ + __extension__ \ + ({ \ + float32x4_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("fcvtzs %0.4s, %1.4s, #%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvtq_n_s64_f64(a, b) \ + __extension__ \ + ({ \ + float64x2_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("fcvtzs %0.2d, %1.2d, #%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvtq_n_u32_f32(a, b) \ + __extension__ \ + ({ \ + float32x4_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("fcvtzu %0.4s, %1.4s, #%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvtq_n_u64_f64(a, b) \ + __extension__ \ + ({ \ + float64x2_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("fcvtzu %0.2d, %1.2d, #%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvts_n_f32_s32(a, b) \ + __extension__ \ + ({ \ + int32_t a_ = (a); \ + float32_t result; \ + __asm__ ("scvtf %s0,%s1,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvts_n_f32_u32(a, b) \ + __extension__ \ + ({ \ + uint32_t a_ = (a); \ + float32_t result; \ + __asm__ ("ucvtf %s0,%s1,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvts_n_s32_f32(a, b) \ + __extension__ \ + ({ \ + float32_t a_ = (a); \ + int32_t result; \ + __asm__ ("fcvtzs %s0,%s1,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vcvts_n_u32_f32(a, b) \ + __extension__ \ + ({ \ + float32_t a_ = (a); \ + uint32_t result; \ + __asm__ ("fcvtzu %s0,%s1,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vcvtx_f32_f64 (float64x2_t a) +{ + float32x2_t result; + __asm__ ("fcvtxn %0.2s,%1.2d" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static 
__inline float32x4_t __attribute__ ((__always_inline__)) +vcvtx_high_f32_f64 (float32x2_t a, float64x2_t b) +{ + float32x4_t result; + __asm__ ("fcvtxn2 %0.4s,%1.2d" + : "=w"(result) + : "w" (b), "0"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vcvtxd_f32_f64 (float64_t a) +{ + float32_t result; + __asm__ ("fcvtxn %s0,%d1" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +#define vext_f32(a, b, c) \ + __extension__ \ + ({ \ + float32x2_t b_ = (b); \ + float32x2_t a_ = (a); \ + float32x2_t result; \ + __asm__ ("ext %0.8b, %1.8b, %2.8b, #%3*4" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vext_f64(a, b, c) \ + __extension__ \ + ({ \ + float64x1_t b_ = (b); \ + float64x1_t a_ = (a); \ + float64x1_t result; \ + __asm__ ("ext %0.8b, %1.8b, %2.8b, #%3*8" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vext_p8(a, b, c) \ + __extension__ \ + ({ \ + poly8x8_t b_ = (b); \ + poly8x8_t a_ = (a); \ + poly8x8_t result; \ + __asm__ ("ext %0.8b,%1.8b,%2.8b,%3" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vext_p16(a, b, c) \ + __extension__ \ + ({ \ + poly16x4_t b_ = (b); \ + poly16x4_t a_ = (a); \ + poly16x4_t result; \ + __asm__ ("ext %0.8b, %1.8b, %2.8b, #%3*2" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vext_s8(a, b, c) \ + __extension__ \ + ({ \ + int8x8_t b_ = (b); \ + int8x8_t a_ = (a); \ + int8x8_t result; \ + __asm__ ("ext %0.8b,%1.8b,%2.8b,%3" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vext_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x4_t b_ = (b); \ + int16x4_t a_ = (a); \ + int16x4_t result; \ + __asm__ ("ext %0.8b, %1.8b, %2.8b, #%3*2" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vext_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x2_t b_ = (b); \ + int32x2_t a_ = (a); \ + int32x2_t result; \ + __asm__ ("ext %0.8b, %1.8b, %2.8b, #%3*4" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vext_s64(a, b, c) \ + __extension__ \ + ({ \ + int64x1_t b_ = (b); \ + int64x1_t a_ = (a); \ + int64x1_t result; \ + __asm__ ("ext %0.8b, %1.8b, %2.8b, #%3*8" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vext_u8(a, b, c) \ + __extension__ \ + ({ \ + uint8x8_t b_ = (b); \ + uint8x8_t a_ = (a); \ + uint8x8_t result; \ + __asm__ ("ext %0.8b,%1.8b,%2.8b,%3" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vext_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x4_t b_ = (b); \ + uint16x4_t a_ = (a); \ + uint16x4_t result; \ + __asm__ ("ext %0.8b, %1.8b, %2.8b, #%3*2" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vext_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x2_t b_ = (b); \ + uint32x2_t a_ = (a); \ + uint32x2_t result; \ + __asm__ ("ext %0.8b, %1.8b, %2.8b, #%3*4" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vext_u64(a, b, c) \ + __extension__ \ + ({ \ + uint64x1_t b_ = (b); \ + uint64x1_t a_ = (a); \ + uint64x1_t result; \ + __asm__ ("ext %0.8b, %1.8b, 
%2.8b, #%3*8" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vextq_f32(a, b, c) \ + __extension__ \ + ({ \ + float32x4_t b_ = (b); \ + float32x4_t a_ = (a); \ + float32x4_t result; \ + __asm__ ("ext %0.16b, %1.16b, %2.16b, #%3*4" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vextq_f64(a, b, c) \ + __extension__ \ + ({ \ + float64x2_t b_ = (b); \ + float64x2_t a_ = (a); \ + float64x2_t result; \ + __asm__ ("ext %0.16b, %1.16b, %2.16b, #%3*8" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vextq_p8(a, b, c) \ + __extension__ \ + ({ \ + poly8x16_t b_ = (b); \ + poly8x16_t a_ = (a); \ + poly8x16_t result; \ + __asm__ ("ext %0.16b, %1.16b, %2.16b, #%3" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vextq_p16(a, b, c) \ + __extension__ \ + ({ \ + poly16x8_t b_ = (b); \ + poly16x8_t a_ = (a); \ + poly16x8_t result; \ + __asm__ ("ext %0.16b, %1.16b, %2.16b, #%3*2" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vextq_s8(a, b, c) \ + __extension__ \ + ({ \ + int8x16_t b_ = (b); \ + int8x16_t a_ = (a); \ + int8x16_t result; \ + __asm__ ("ext %0.16b, %1.16b, %2.16b, #%3" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vextq_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x8_t b_ = (b); \ + int16x8_t a_ = (a); \ + int16x8_t result; \ + __asm__ ("ext %0.16b, %1.16b, %2.16b, #%3*2" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vextq_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x4_t b_ = (b); \ + int32x4_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("ext %0.16b, %1.16b, %2.16b, #%3*4" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vextq_s64(a, b, c) \ + __extension__ \ + ({ \ + int64x2_t b_ = (b); \ + int64x2_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("ext %0.16b, %1.16b, %2.16b, #%3*8" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vextq_u8(a, b, c) \ + __extension__ \ + ({ \ + uint8x16_t b_ = (b); \ + uint8x16_t a_ = (a); \ + uint8x16_t result; \ + __asm__ ("ext %0.16b, %1.16b, %2.16b, #%3" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vextq_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x8_t b_ = (b); \ + uint16x8_t a_ = (a); \ + uint16x8_t result; \ + __asm__ ("ext %0.16b, %1.16b, %2.16b, #%3*2" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vextq_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x4_t b_ = (b); \ + uint32x4_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("ext %0.16b, %1.16b, %2.16b, #%3*4" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vextq_u64(a, b, c) \ + __extension__ \ + ({ \ + uint64x2_t b_ = (b); \ + uint64x2_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("ext %0.16b, %1.16b, %2.16b, #%3*8" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vfma_f32 (float32x2_t a, float32x2_t b, float32x2_t c) +{ + float32x2_t result; + __asm__ ("fmla 
%0.2s,%2.2s,%3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vfmaq_f32 (float32x4_t a, float32x4_t b, float32x4_t c) +{ + float32x4_t result; + __asm__ ("fmla %0.4s,%2.4s,%3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vfmaq_f64 (float64x2_t a, float64x2_t b, float64x2_t c) +{ + float64x2_t result; + __asm__ ("fmla %0.2d,%2.2d,%3.2d" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vfma_n_f32 (float32x2_t a, float32x2_t b, float32_t c) +{ + float32x2_t result; + __asm__ ("fmla %0.2s, %2.2s, %3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vfmaq_n_f32 (float32x4_t a, float32x4_t b, float32_t c) +{ + float32x4_t result; + __asm__ ("fmla %0.4s, %2.4s, %3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vfmaq_n_f64 (float64x2_t a, float64x2_t b, float64_t c) +{ + float64x2_t result; + __asm__ ("fmla %0.2d, %2.2d, %3.d[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vfms_f32 (float32x2_t a, float32x2_t b, float32x2_t c) +{ + float32x2_t result; + __asm__ ("fmls %0.2s,%2.2s,%3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vfmsq_f32 (float32x4_t a, float32x4_t b, float32x4_t c) +{ + float32x4_t result; + __asm__ ("fmls %0.4s,%2.4s,%3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vfmsq_f64 (float64x2_t a, float64x2_t b, float64x2_t c) +{ + float64x2_t result; + __asm__ ("fmls %0.2d,%2.2d,%3.2d" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vget_high_f32 (float32x4_t a) +{ + float32x2_t result; + __asm__ ("ins %0.d[0], %1.d[1]" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vget_high_f64 (float64x2_t a) +{ + float64x1_t result; + __asm__ ("ins %0.d[0], %1.d[1]" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vget_high_p8 (poly8x16_t a) +{ + poly8x8_t result; + __asm__ ("ins %0.d[0], %1.d[1]" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vget_high_p16 (poly16x8_t a) +{ + poly16x4_t result; + __asm__ ("ins %0.d[0], %1.d[1]" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vget_high_s8 (int8x16_t a) +{ + int8x8_t result; + __asm__ ("ins %0.d[0], %1.d[1]" + : "=w"(result) 
+ : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vget_high_s16 (int16x8_t a) +{ + int16x4_t result; + __asm__ ("ins %0.d[0], %1.d[1]" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vget_high_s32 (int32x4_t a) +{ + int32x2_t result; + __asm__ ("ins %0.d[0], %1.d[1]" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vget_high_s64 (int64x2_t a) +{ + int64x1_t result; + __asm__ ("ins %0.d[0], %1.d[1]" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vget_high_u8 (uint8x16_t a) +{ + uint8x8_t result; + __asm__ ("ins %0.d[0], %1.d[1]" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vget_high_u16 (uint16x8_t a) +{ + uint16x4_t result; + __asm__ ("ins %0.d[0], %1.d[1]" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vget_high_u32 (uint32x4_t a) +{ + uint32x2_t result; + __asm__ ("ins %0.d[0], %1.d[1]" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vget_high_u64 (uint64x2_t a) +{ + uint64x1_t result; + __asm__ ("ins %0.d[0], %1.d[1]" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vhsub_s8 (int8x8_t a, int8x8_t b) +{ + int8x8_t result; + __asm__ ("shsub %0.8b, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vhsub_s16 (int16x4_t a, int16x4_t b) +{ + int16x4_t result; + __asm__ ("shsub %0.4h, %1.4h, %2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vhsub_s32 (int32x2_t a, int32x2_t b) +{ + int32x2_t result; + __asm__ ("shsub %0.2s, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vhsub_u8 (uint8x8_t a, uint8x8_t b) +{ + uint8x8_t result; + __asm__ ("uhsub %0.8b, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vhsub_u16 (uint16x4_t a, uint16x4_t b) +{ + uint16x4_t result; + __asm__ ("uhsub %0.4h, %1.4h, %2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vhsub_u32 (uint32x2_t a, uint32x2_t b) +{ + uint32x2_t result; + __asm__ ("uhsub %0.2s, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vhsubq_s8 (int8x16_t a, int8x16_t b) +{ + int8x16_t result; + __asm__ ("shsub %0.16b, %1.16b, %2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + 
+__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vhsubq_s16 (int16x8_t a, int16x8_t b) +{ + int16x8_t result; + __asm__ ("shsub %0.8h, %1.8h, %2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vhsubq_s32 (int32x4_t a, int32x4_t b) +{ + int32x4_t result; + __asm__ ("shsub %0.4s, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vhsubq_u8 (uint8x16_t a, uint8x16_t b) +{ + uint8x16_t result; + __asm__ ("uhsub %0.16b, %1.16b, %2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vhsubq_u16 (uint16x8_t a, uint16x8_t b) +{ + uint16x8_t result; + __asm__ ("uhsub %0.8h, %1.8h, %2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vhsubq_u32 (uint32x4_t a, uint32x4_t b) +{ + uint32x4_t result; + __asm__ ("uhsub %0.4s, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vld1_dup_f32 (const float32_t * a) +{ + float32x2_t result; + __asm__ ("ld1r {%0.2s}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vld1_dup_f64 (const float64_t * a) +{ + float64x1_t result; + __asm__ ("ld1r {%0.1d}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vld1_dup_p8 (const poly8_t * a) +{ + poly8x8_t result; + __asm__ ("ld1r {%0.8b}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vld1_dup_p16 (const poly16_t * a) +{ + poly16x4_t result; + __asm__ ("ld1r {%0.4h}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vld1_dup_s8 (const int8_t * a) +{ + int8x8_t result; + __asm__ ("ld1r {%0.8b}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vld1_dup_s16 (const int16_t * a) +{ + int16x4_t result; + __asm__ ("ld1r {%0.4h}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vld1_dup_s32 (const int32_t * a) +{ + int32x2_t result; + __asm__ ("ld1r {%0.2s}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vld1_dup_s64 (const int64_t * a) +{ + int64x1_t result; + __asm__ ("ld1r {%0.1d}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vld1_dup_u8 (const uint8_t * a) +{ + uint8x8_t result; + __asm__ ("ld1r {%0.8b}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ 
((__always_inline__)) +vld1_dup_u16 (const uint16_t * a) +{ + uint16x4_t result; + __asm__ ("ld1r {%0.4h}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vld1_dup_u32 (const uint32_t * a) +{ + uint32x2_t result; + __asm__ ("ld1r {%0.2s}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vld1_dup_u64 (const uint64_t * a) +{ + uint64x1_t result; + __asm__ ("ld1r {%0.1d}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +#define vld1_lane_f32(a, b, c) \ + __extension__ \ + ({ \ + float32x2_t b_ = (b); \ + const float32_t * a_ = (a); \ + float32x2_t result; \ + __asm__ ("ld1 {%0.s}[%1], %2" \ + : "=w"(result) \ + : "i" (c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1_lane_f64(a, b, c) \ + __extension__ \ + ({ \ + float64x1_t b_ = (b); \ + const float64_t * a_ = (a); \ + float64x1_t result; \ + __asm__ ("ld1 {%0.d}[%1], %2" \ + : "=w"(result) \ + : "i" (c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1_lane_p8(a, b, c) \ + __extension__ \ + ({ \ + poly8x8_t b_ = (b); \ + const poly8_t * a_ = (a); \ + poly8x8_t result; \ + __asm__ ("ld1 {%0.b}[%1], %2" \ + : "=w"(result) \ + : "i" (c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1_lane_p16(a, b, c) \ + __extension__ \ + ({ \ + poly16x4_t b_ = (b); \ + const poly16_t * a_ = (a); \ + poly16x4_t result; \ + __asm__ ("ld1 {%0.h}[%1], %2" \ + : "=w"(result) \ + : "i" (c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1_lane_s8(a, b, c) \ + __extension__ \ + ({ \ + int8x8_t b_ = (b); \ + const int8_t * a_ = (a); \ + int8x8_t result; \ + __asm__ ("ld1 {%0.b}[%1], %2" \ + : "=w"(result) \ + : "i" (c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1_lane_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x4_t b_ = (b); \ + const int16_t * a_ = (a); \ + int16x4_t result; \ + __asm__ ("ld1 {%0.h}[%1], %2" \ + : "=w"(result) \ + : "i" (c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1_lane_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x2_t b_ = (b); \ + const int32_t * a_ = (a); \ + int32x2_t result; \ + __asm__ ("ld1 {%0.s}[%1], %2" \ + : "=w"(result) \ + : "i" (c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1_lane_s64(a, b, c) \ + __extension__ \ + ({ \ + int64x1_t b_ = (b); \ + const int64_t * a_ = (a); \ + int64x1_t result; \ + __asm__ ("ld1 {%0.d}[%1], %2" \ + : "=w"(result) \ + : "i" (c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1_lane_u8(a, b, c) \ + __extension__ \ + ({ \ + uint8x8_t b_ = (b); \ + const uint8_t * a_ = (a); \ + uint8x8_t result; \ + __asm__ ("ld1 {%0.b}[%1], %2" \ + : "=w"(result) \ + : "i" (c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1_lane_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x4_t b_ = (b); \ + const uint16_t * a_ = (a); \ + uint16x4_t result; \ + __asm__ ("ld1 {%0.h}[%1], %2" \ + : "=w"(result) \ + : "i" (c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1_lane_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x2_t b_ = (b); \ + const uint32_t * a_ = (a); \ + uint32x2_t result; \ + __asm__ ("ld1 {%0.s}[%1], %2" \ + : 
"=w"(result) \ + : "i" (c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1_lane_u64(a, b, c) \ + __extension__ \ + ({ \ + uint64x1_t b_ = (b); \ + const uint64_t * a_ = (a); \ + uint64x1_t result; \ + __asm__ ("ld1 {%0.d}[%1], %2" \ + : "=w"(result) \ + : "i" (c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vld1q_dup_f32 (const float32_t * a) +{ + float32x4_t result; + __asm__ ("ld1r {%0.4s}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vld1q_dup_f64 (const float64_t * a) +{ + float64x2_t result; + __asm__ ("ld1r {%0.2d}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vld1q_dup_p8 (const poly8_t * a) +{ + poly8x16_t result; + __asm__ ("ld1r {%0.16b}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vld1q_dup_p16 (const poly16_t * a) +{ + poly16x8_t result; + __asm__ ("ld1r {%0.8h}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vld1q_dup_s8 (const int8_t * a) +{ + int8x16_t result; + __asm__ ("ld1r {%0.16b}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vld1q_dup_s16 (const int16_t * a) +{ + int16x8_t result; + __asm__ ("ld1r {%0.8h}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vld1q_dup_s32 (const int32_t * a) +{ + int32x4_t result; + __asm__ ("ld1r {%0.4s}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vld1q_dup_s64 (const int64_t * a) +{ + int64x2_t result; + __asm__ ("ld1r {%0.2d}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vld1q_dup_u8 (const uint8_t * a) +{ + uint8x16_t result; + __asm__ ("ld1r {%0.16b}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vld1q_dup_u16 (const uint16_t * a) +{ + uint16x8_t result; + __asm__ ("ld1r {%0.8h}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vld1q_dup_u32 (const uint32_t * a) +{ + uint32x4_t result; + __asm__ ("ld1r {%0.4s}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vld1q_dup_u64 (const uint64_t * a) +{ + uint64x2_t result; + __asm__ ("ld1r {%0.2d}, %1" + : "=w"(result) + : "Utv"(*a) + : /* No clobbers */); + return result; +} + +#define vld1q_lane_f32(a, b, c) \ + __extension__ \ + ({ \ + float32x4_t b_ = (b); \ + const float32_t * a_ = (a); \ + float32x4_t result; \ + __asm__ ("ld1 {%0.s}[%1], %2" \ + : "=w"(result) \ + : "i"(c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + 
result; \ + }) + +#define vld1q_lane_f64(a, b, c) \ + __extension__ \ + ({ \ + float64x2_t b_ = (b); \ + const float64_t * a_ = (a); \ + float64x2_t result; \ + __asm__ ("ld1 {%0.d}[%1], %2" \ + : "=w"(result) \ + : "i"(c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1q_lane_p8(a, b, c) \ + __extension__ \ + ({ \ + poly8x16_t b_ = (b); \ + const poly8_t * a_ = (a); \ + poly8x16_t result; \ + __asm__ ("ld1 {%0.b}[%1], %2" \ + : "=w"(result) \ + : "i"(c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1q_lane_p16(a, b, c) \ + __extension__ \ + ({ \ + poly16x8_t b_ = (b); \ + const poly16_t * a_ = (a); \ + poly16x8_t result; \ + __asm__ ("ld1 {%0.h}[%1], %2" \ + : "=w"(result) \ + : "i"(c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1q_lane_s8(a, b, c) \ + __extension__ \ + ({ \ + int8x16_t b_ = (b); \ + const int8_t * a_ = (a); \ + int8x16_t result; \ + __asm__ ("ld1 {%0.b}[%1], %2" \ + : "=w"(result) \ + : "i"(c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1q_lane_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x8_t b_ = (b); \ + const int16_t * a_ = (a); \ + int16x8_t result; \ + __asm__ ("ld1 {%0.h}[%1], %2" \ + : "=w"(result) \ + : "i"(c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1q_lane_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x4_t b_ = (b); \ + const int32_t * a_ = (a); \ + int32x4_t result; \ + __asm__ ("ld1 {%0.s}[%1], %2" \ + : "=w"(result) \ + : "i"(c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1q_lane_s64(a, b, c) \ + __extension__ \ + ({ \ + int64x2_t b_ = (b); \ + const int64_t * a_ = (a); \ + int64x2_t result; \ + __asm__ ("ld1 {%0.d}[%1], %2" \ + : "=w"(result) \ + : "i"(c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1q_lane_u8(a, b, c) \ + __extension__ \ + ({ \ + uint8x16_t b_ = (b); \ + const uint8_t * a_ = (a); \ + uint8x16_t result; \ + __asm__ ("ld1 {%0.b}[%1], %2" \ + : "=w"(result) \ + : "i"(c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1q_lane_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x8_t b_ = (b); \ + const uint16_t * a_ = (a); \ + uint16x8_t result; \ + __asm__ ("ld1 {%0.h}[%1], %2" \ + : "=w"(result) \ + : "i"(c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1q_lane_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x4_t b_ = (b); \ + const uint32_t * a_ = (a); \ + uint32x4_t result; \ + __asm__ ("ld1 {%0.s}[%1], %2" \ + : "=w"(result) \ + : "i"(c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +#define vld1q_lane_u64(a, b, c) \ + __extension__ \ + ({ \ + uint64x2_t b_ = (b); \ + const uint64_t * a_ = (a); \ + uint64x2_t result; \ + __asm__ ("ld1 {%0.d}[%1], %2" \ + : "=w"(result) \ + : "i"(c), "Utv"(*a_), "0"(b_) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmla_n_f32 (float32x2_t a, float32x2_t b, float32_t c) +{ + float32x2_t result; + float32x2_t t1; + __asm__ ("fmul %1.2s, %3.2s, %4.s[0]; fadd %0.2s, %0.2s, %1.2s" + : "=w"(result), "=w"(t1) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmla_n_s16 (int16x4_t a, int16x4_t b, int16_t c) +{ + int16x4_t result; + __asm__ ("mla %0.4h,%2.4h,%3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), 
"x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmla_n_s32 (int32x2_t a, int32x2_t b, int32_t c) +{ + int32x2_t result; + __asm__ ("mla %0.2s,%2.2s,%3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmla_n_u16 (uint16x4_t a, uint16x4_t b, uint16_t c) +{ + uint16x4_t result; + __asm__ ("mla %0.4h,%2.4h,%3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmla_n_u32 (uint32x2_t a, uint32x2_t b, uint32_t c) +{ + uint32x2_t result; + __asm__ ("mla %0.2s,%2.2s,%3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vmla_s8 (int8x8_t a, int8x8_t b, int8x8_t c) +{ + int8x8_t result; + __asm__ ("mla %0.8b, %2.8b, %3.8b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmla_s16 (int16x4_t a, int16x4_t b, int16x4_t c) +{ + int16x4_t result; + __asm__ ("mla %0.4h, %2.4h, %3.4h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmla_s32 (int32x2_t a, int32x2_t b, int32x2_t c) +{ + int32x2_t result; + __asm__ ("mla %0.2s, %2.2s, %3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vmla_u8 (uint8x8_t a, uint8x8_t b, uint8x8_t c) +{ + uint8x8_t result; + __asm__ ("mla %0.8b, %2.8b, %3.8b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmla_u16 (uint16x4_t a, uint16x4_t b, uint16x4_t c) +{ + uint16x4_t result; + __asm__ ("mla %0.4h, %2.4h, %3.4h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmla_u32 (uint32x2_t a, uint32x2_t b, uint32x2_t c) +{ + uint32x2_t result; + __asm__ ("mla %0.2s, %2.2s, %3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +#define vmlal_high_lane_s16(a, b, c, d) \ + __extension__ \ + ({ \ + int16x8_t c_ = (c); \ + int16x8_t b_ = (b); \ + int32x4_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("smlal2 %0.4s, %2.8h, %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_high_lane_s32(a, b, c, d) \ + __extension__ \ + ({ \ + int32x4_t c_ = (c); \ + int32x4_t b_ = (b); \ + int64x2_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("smlal2 %0.2d, %2.4s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_high_lane_u16(a, b, c, d) \ + __extension__ \ + ({ \ + uint16x8_t c_ = (c); \ + uint16x8_t b_ = (b); \ + uint32x4_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("umlal2 %0.4s, %2.8h, %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_high_lane_u32(a, b, c, d) \ + __extension__ 
\ + ({ \ + uint32x4_t c_ = (c); \ + uint32x4_t b_ = (b); \ + uint64x2_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("umlal2 %0.2d, %2.4s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_high_laneq_s16(a, b, c, d) \ + __extension__ \ + ({ \ + int16x8_t c_ = (c); \ + int16x8_t b_ = (b); \ + int32x4_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("smlal2 %0.4s, %2.8h, %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_high_laneq_s32(a, b, c, d) \ + __extension__ \ + ({ \ + int32x4_t c_ = (c); \ + int32x4_t b_ = (b); \ + int64x2_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("smlal2 %0.2d, %2.4s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_high_laneq_u16(a, b, c, d) \ + __extension__ \ + ({ \ + uint16x8_t c_ = (c); \ + uint16x8_t b_ = (b); \ + uint32x4_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("umlal2 %0.4s, %2.8h, %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_high_laneq_u32(a, b, c, d) \ + __extension__ \ + ({ \ + uint32x4_t c_ = (c); \ + uint32x4_t b_ = (b); \ + uint64x2_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("umlal2 %0.2d, %2.4s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlal_high_n_s16 (int32x4_t a, int16x8_t b, int16_t c) +{ + int32x4_t result; + __asm__ ("smlal2 %0.4s,%2.8h,%3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmlal_high_n_s32 (int64x2_t a, int32x4_t b, int32_t c) +{ + int64x2_t result; + __asm__ ("smlal2 %0.2d,%2.4s,%3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlal_high_n_u16 (uint32x4_t a, uint16x8_t b, uint16_t c) +{ + uint32x4_t result; + __asm__ ("umlal2 %0.4s,%2.8h,%3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmlal_high_n_u32 (uint64x2_t a, uint32x4_t b, uint32_t c) +{ + uint64x2_t result; + __asm__ ("umlal2 %0.2d,%2.4s,%3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmlal_high_s8 (int16x8_t a, int8x16_t b, int8x16_t c) +{ + int16x8_t result; + __asm__ ("smlal2 %0.8h,%2.16b,%3.16b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlal_high_s16 (int32x4_t a, int16x8_t b, int16x8_t c) +{ + int32x4_t result; + __asm__ ("smlal2 %0.4s,%2.8h,%3.8h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmlal_high_s32 (int64x2_t a, int32x4_t b, int32x4_t c) +{ + int64x2_t result; + __asm__ ("smlal2 %0.2d,%2.4s,%3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline 
uint16x8_t __attribute__ ((__always_inline__)) +vmlal_high_u8 (uint16x8_t a, uint8x16_t b, uint8x16_t c) +{ + uint16x8_t result; + __asm__ ("umlal2 %0.8h,%2.16b,%3.16b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlal_high_u16 (uint32x4_t a, uint16x8_t b, uint16x8_t c) +{ + uint32x4_t result; + __asm__ ("umlal2 %0.4s,%2.8h,%3.8h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmlal_high_u32 (uint64x2_t a, uint32x4_t b, uint32x4_t c) +{ + uint64x2_t result; + __asm__ ("umlal2 %0.2d,%2.4s,%3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +#define vmlal_lane_s16(a, b, c, d) \ + __extension__ \ + ({ \ + int16x4_t c_ = (c); \ + int16x4_t b_ = (b); \ + int32x4_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("smlal %0.4s,%2.4h,%3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_lane_s32(a, b, c, d) \ + __extension__ \ + ({ \ + int32x2_t c_ = (c); \ + int32x2_t b_ = (b); \ + int64x2_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("smlal %0.2d,%2.2s,%3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_lane_u16(a, b, c, d) \ + __extension__ \ + ({ \ + uint16x4_t c_ = (c); \ + uint16x4_t b_ = (b); \ + uint32x4_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("umlal %0.4s,%2.4h,%3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_lane_u32(a, b, c, d) \ + __extension__ \ + ({ \ + uint32x2_t c_ = (c); \ + uint32x2_t b_ = (b); \ + uint64x2_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("umlal %0.2d, %2.2s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_laneq_s16(a, b, c, d) \ + __extension__ \ + ({ \ + int16x8_t c_ = (c); \ + int16x4_t b_ = (b); \ + int32x4_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("smlal %0.4s, %2.4h, %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_laneq_s32(a, b, c, d) \ + __extension__ \ + ({ \ + int32x4_t c_ = (c); \ + int32x2_t b_ = (b); \ + int64x2_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("smlal %0.2d, %2.2s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_laneq_u16(a, b, c, d) \ + __extension__ \ + ({ \ + uint16x8_t c_ = (c); \ + uint16x4_t b_ = (b); \ + uint32x4_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("umlal %0.4s, %2.4h, %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlal_laneq_u32(a, b, c, d) \ + __extension__ \ + ({ \ + uint32x4_t c_ = (c); \ + uint32x2_t b_ = (b); \ + uint64x2_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("umlal %0.2d, %2.2s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlal_n_s16 (int32x4_t a, int16x4_t b, int16_t c) +{ + int32x4_t result; + __asm__ ("smlal %0.4s,%2.4h,%3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + 
return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmlal_n_s32 (int64x2_t a, int32x2_t b, int32_t c) +{ + int64x2_t result; + __asm__ ("smlal %0.2d,%2.2s,%3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlal_n_u16 (uint32x4_t a, uint16x4_t b, uint16_t c) +{ + uint32x4_t result; + __asm__ ("umlal %0.4s,%2.4h,%3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmlal_n_u32 (uint64x2_t a, uint32x2_t b, uint32_t c) +{ + uint64x2_t result; + __asm__ ("umlal %0.2d,%2.2s,%3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmlal_s8 (int16x8_t a, int8x8_t b, int8x8_t c) +{ + int16x8_t result; + __asm__ ("smlal %0.8h,%2.8b,%3.8b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlal_s16 (int32x4_t a, int16x4_t b, int16x4_t c) +{ + int32x4_t result; + __asm__ ("smlal %0.4s,%2.4h,%3.4h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmlal_s32 (int64x2_t a, int32x2_t b, int32x2_t c) +{ + int64x2_t result; + __asm__ ("smlal %0.2d,%2.2s,%3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmlal_u8 (uint16x8_t a, uint8x8_t b, uint8x8_t c) +{ + uint16x8_t result; + __asm__ ("umlal %0.8h,%2.8b,%3.8b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlal_u16 (uint32x4_t a, uint16x4_t b, uint16x4_t c) +{ + uint32x4_t result; + __asm__ ("umlal %0.4s,%2.4h,%3.4h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmlal_u32 (uint64x2_t a, uint32x2_t b, uint32x2_t c) +{ + uint64x2_t result; + __asm__ ("umlal %0.2d,%2.2s,%3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmlaq_n_f32 (float32x4_t a, float32x4_t b, float32_t c) +{ + float32x4_t result; + float32x4_t t1; + __asm__ ("fmul %1.4s, %3.4s, %4.s[0]; fadd %0.4s, %0.4s, %1.4s" + : "=w"(result), "=w"(t1) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vmlaq_n_f64 (float64x2_t a, float64x2_t b, float64_t c) +{ + float64x2_t result; + float64x2_t t1; + __asm__ ("fmul %1.2d, %3.2d, %4.d[0]; fadd %0.2d, %0.2d, %1.2d" + : "=w"(result), "=w"(t1) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmlaq_n_s16 (int16x8_t a, int16x8_t b, int16_t c) +{ + int16x8_t result; + __asm__ ("mla %0.8h,%2.8h,%3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline 
int32x4_t __attribute__ ((__always_inline__)) +vmlaq_n_s32 (int32x4_t a, int32x4_t b, int32_t c) +{ + int32x4_t result; + __asm__ ("mla %0.4s,%2.4s,%3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmlaq_n_u16 (uint16x8_t a, uint16x8_t b, uint16_t c) +{ + uint16x8_t result; + __asm__ ("mla %0.8h,%2.8h,%3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlaq_n_u32 (uint32x4_t a, uint32x4_t b, uint32_t c) +{ + uint32x4_t result; + __asm__ ("mla %0.4s,%2.4s,%3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vmlaq_s8 (int8x16_t a, int8x16_t b, int8x16_t c) +{ + int8x16_t result; + __asm__ ("mla %0.16b, %2.16b, %3.16b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmlaq_s16 (int16x8_t a, int16x8_t b, int16x8_t c) +{ + int16x8_t result; + __asm__ ("mla %0.8h, %2.8h, %3.8h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlaq_s32 (int32x4_t a, int32x4_t b, int32x4_t c) +{ + int32x4_t result; + __asm__ ("mla %0.4s, %2.4s, %3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vmlaq_u8 (uint8x16_t a, uint8x16_t b, uint8x16_t c) +{ + uint8x16_t result; + __asm__ ("mla %0.16b, %2.16b, %3.16b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmlaq_u16 (uint16x8_t a, uint16x8_t b, uint16x8_t c) +{ + uint16x8_t result; + __asm__ ("mla %0.8h, %2.8h, %3.8h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlaq_u32 (uint32x4_t a, uint32x4_t b, uint32x4_t c) +{ + uint32x4_t result; + __asm__ ("mla %0.4s, %2.4s, %3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmls_n_f32 (float32x2_t a, float32x2_t b, float32_t c) +{ + float32x2_t result; + float32x2_t t1; + __asm__ ("fmul %1.2s, %3.2s, %4.s[0]; fsub %0.2s, %0.2s, %1.2s" + : "=w"(result), "=w"(t1) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmls_n_s16 (int16x4_t a, int16x4_t b, int16_t c) +{ + int16x4_t result; + __asm__ ("mls %0.4h, %2.4h, %3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmls_n_s32 (int32x2_t a, int32x2_t b, int32_t c) +{ + int32x2_t result; + __asm__ ("mls %0.2s, %2.2s, %3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmls_n_u16 (uint16x4_t a, uint16x4_t b, uint16_t c) +{ + uint16x4_t 
result; + __asm__ ("mls %0.4h, %2.4h, %3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmls_n_u32 (uint32x2_t a, uint32x2_t b, uint32_t c) +{ + uint32x2_t result; + __asm__ ("mls %0.2s, %2.2s, %3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vmls_s8 (int8x8_t a, int8x8_t b, int8x8_t c) +{ + int8x8_t result; + __asm__ ("mls %0.8b,%2.8b,%3.8b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmls_s16 (int16x4_t a, int16x4_t b, int16x4_t c) +{ + int16x4_t result; + __asm__ ("mls %0.4h,%2.4h,%3.4h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmls_s32 (int32x2_t a, int32x2_t b, int32x2_t c) +{ + int32x2_t result; + __asm__ ("mls %0.2s,%2.2s,%3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vmls_u8 (uint8x8_t a, uint8x8_t b, uint8x8_t c) +{ + uint8x8_t result; + __asm__ ("mls %0.8b,%2.8b,%3.8b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmls_u16 (uint16x4_t a, uint16x4_t b, uint16x4_t c) +{ + uint16x4_t result; + __asm__ ("mls %0.4h,%2.4h,%3.4h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmls_u32 (uint32x2_t a, uint32x2_t b, uint32x2_t c) +{ + uint32x2_t result; + __asm__ ("mls %0.2s,%2.2s,%3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +#define vmlsl_high_lane_s16(a, b, c, d) \ + __extension__ \ + ({ \ + int16x8_t c_ = (c); \ + int16x8_t b_ = (b); \ + int32x4_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("smlsl2 %0.4s, %2.8h, %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_high_lane_s32(a, b, c, d) \ + __extension__ \ + ({ \ + int32x4_t c_ = (c); \ + int32x4_t b_ = (b); \ + int64x2_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("smlsl2 %0.2d, %2.4s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_high_lane_u16(a, b, c, d) \ + __extension__ \ + ({ \ + uint16x8_t c_ = (c); \ + uint16x8_t b_ = (b); \ + uint32x4_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("umlsl2 %0.4s, %2.8h, %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_high_lane_u32(a, b, c, d) \ + __extension__ \ + ({ \ + uint32x4_t c_ = (c); \ + uint32x4_t b_ = (b); \ + uint64x2_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("umlsl2 %0.2d, %2.4s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_high_laneq_s16(a, b, c, d) \ + __extension__ \ + ({ \ + int16x8_t c_ = (c); \ + int16x8_t b_ = (b); \ + int32x4_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("smlsl2 %0.4s, %2.8h, %3.h[%4]" \ + : "=w"(result) \ + 
: "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_high_laneq_s32(a, b, c, d) \ + __extension__ \ + ({ \ + int32x4_t c_ = (c); \ + int32x4_t b_ = (b); \ + int64x2_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("smlsl2 %0.2d, %2.4s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_high_laneq_u16(a, b, c, d) \ + __extension__ \ + ({ \ + uint16x8_t c_ = (c); \ + uint16x8_t b_ = (b); \ + uint32x4_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("umlsl2 %0.4s, %2.8h, %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_high_laneq_u32(a, b, c, d) \ + __extension__ \ + ({ \ + uint32x4_t c_ = (c); \ + uint32x4_t b_ = (b); \ + uint64x2_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("umlsl2 %0.2d, %2.4s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlsl_high_n_s16 (int32x4_t a, int16x8_t b, int16_t c) +{ + int32x4_t result; + __asm__ ("smlsl2 %0.4s, %2.8h, %3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmlsl_high_n_s32 (int64x2_t a, int32x4_t b, int32_t c) +{ + int64x2_t result; + __asm__ ("smlsl2 %0.2d, %2.4s, %3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlsl_high_n_u16 (uint32x4_t a, uint16x8_t b, uint16_t c) +{ + uint32x4_t result; + __asm__ ("umlsl2 %0.4s, %2.8h, %3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmlsl_high_n_u32 (uint64x2_t a, uint32x4_t b, uint32_t c) +{ + uint64x2_t result; + __asm__ ("umlsl2 %0.2d, %2.4s, %3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmlsl_high_s8 (int16x8_t a, int8x16_t b, int8x16_t c) +{ + int16x8_t result; + __asm__ ("smlsl2 %0.8h,%2.16b,%3.16b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlsl_high_s16 (int32x4_t a, int16x8_t b, int16x8_t c) +{ + int32x4_t result; + __asm__ ("smlsl2 %0.4s,%2.8h,%3.8h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmlsl_high_s32 (int64x2_t a, int32x4_t b, int32x4_t c) +{ + int64x2_t result; + __asm__ ("smlsl2 %0.2d,%2.4s,%3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmlsl_high_u8 (uint16x8_t a, uint8x16_t b, uint8x16_t c) +{ + uint16x8_t result; + __asm__ ("umlsl2 %0.8h,%2.16b,%3.16b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlsl_high_u16 (uint32x4_t a, uint16x8_t b, uint16x8_t c) +{ + uint32x4_t result; + __asm__ ("umlsl2 %0.4s,%2.8h,%3.8h" + : "=w"(result) + : 
"0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmlsl_high_u32 (uint64x2_t a, uint32x4_t b, uint32x4_t c) +{ + uint64x2_t result; + __asm__ ("umlsl2 %0.2d,%2.4s,%3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +#define vmlsl_lane_s16(a, b, c, d) \ + __extension__ \ + ({ \ + int16x4_t c_ = (c); \ + int16x4_t b_ = (b); \ + int32x4_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("smlsl %0.4s, %2.4h, %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_lane_s32(a, b, c, d) \ + __extension__ \ + ({ \ + int32x2_t c_ = (c); \ + int32x2_t b_ = (b); \ + int64x2_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("smlsl %0.2d, %2.2s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_lane_u16(a, b, c, d) \ + __extension__ \ + ({ \ + uint16x4_t c_ = (c); \ + uint16x4_t b_ = (b); \ + uint32x4_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("umlsl %0.4s, %2.4h, %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_lane_u32(a, b, c, d) \ + __extension__ \ + ({ \ + uint32x2_t c_ = (c); \ + uint32x2_t b_ = (b); \ + uint64x2_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("umlsl %0.2d, %2.2s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_laneq_s16(a, b, c, d) \ + __extension__ \ + ({ \ + int16x8_t c_ = (c); \ + int16x4_t b_ = (b); \ + int32x4_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("smlsl %0.4s, %2.4h, %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_laneq_s32(a, b, c, d) \ + __extension__ \ + ({ \ + int32x4_t c_ = (c); \ + int32x2_t b_ = (b); \ + int64x2_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("smlsl %0.2d, %2.2s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_laneq_u16(a, b, c, d) \ + __extension__ \ + ({ \ + uint16x8_t c_ = (c); \ + uint16x4_t b_ = (b); \ + uint32x4_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("umlsl %0.4s, %2.4h, %3.h[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "x"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmlsl_laneq_u32(a, b, c, d) \ + __extension__ \ + ({ \ + uint32x4_t c_ = (c); \ + uint32x2_t b_ = (b); \ + uint64x2_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("umlsl %0.2d, %2.2s, %3.s[%4]" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "w"(c_), "i"(d) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlsl_n_s16 (int32x4_t a, int16x4_t b, int16_t c) +{ + int32x4_t result; + __asm__ ("smlsl %0.4s, %2.4h, %3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmlsl_n_s32 (int64x2_t a, int32x2_t b, int32_t c) +{ + int64x2_t result; + __asm__ ("smlsl %0.2d, %2.2s, %3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlsl_n_u16 (uint32x4_t a, uint16x4_t b, uint16_t c) +{ + uint32x4_t result; + __asm__ 
("umlsl %0.4s, %2.4h, %3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmlsl_n_u32 (uint64x2_t a, uint32x2_t b, uint32_t c) +{ + uint64x2_t result; + __asm__ ("umlsl %0.2d, %2.2s, %3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmlsl_s8 (int16x8_t a, int8x8_t b, int8x8_t c) +{ + int16x8_t result; + __asm__ ("smlsl %0.8h, %2.8b, %3.8b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlsl_s16 (int32x4_t a, int16x4_t b, int16x4_t c) +{ + int32x4_t result; + __asm__ ("smlsl %0.4s, %2.4h, %3.4h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmlsl_s32 (int64x2_t a, int32x2_t b, int32x2_t c) +{ + int64x2_t result; + __asm__ ("smlsl %0.2d, %2.2s, %3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmlsl_u8 (uint16x8_t a, uint8x8_t b, uint8x8_t c) +{ + uint16x8_t result; + __asm__ ("umlsl %0.8h, %2.8b, %3.8b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlsl_u16 (uint32x4_t a, uint16x4_t b, uint16x4_t c) +{ + uint32x4_t result; + __asm__ ("umlsl %0.4s, %2.4h, %3.4h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmlsl_u32 (uint64x2_t a, uint32x2_t b, uint32x2_t c) +{ + uint64x2_t result; + __asm__ ("umlsl %0.2d, %2.2s, %3.2s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmlsq_n_f32 (float32x4_t a, float32x4_t b, float32_t c) +{ + float32x4_t result; + float32x4_t t1; + __asm__ ("fmul %1.4s, %3.4s, %4.s[0]; fsub %0.4s, %0.4s, %1.4s" + : "=w"(result), "=w"(t1) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vmlsq_n_f64 (float64x2_t a, float64x2_t b, float64_t c) +{ + float64x2_t result; + float64x2_t t1; + __asm__ ("fmul %1.2d, %3.2d, %4.d[0]; fsub %0.2d, %0.2d, %1.2d" + : "=w"(result), "=w"(t1) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmlsq_n_s16 (int16x8_t a, int16x8_t b, int16_t c) +{ + int16x8_t result; + __asm__ ("mls %0.8h, %2.8h, %3.h[0]" + : "=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlsq_n_s32 (int32x4_t a, int32x4_t b, int32_t c) +{ + int32x4_t result; + __asm__ ("mls %0.4s, %2.4s, %3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmlsq_n_u16 (uint16x8_t a, uint16x8_t b, uint16_t c) +{ + uint16x8_t result; + __asm__ ("mls %0.8h, %2.8h, %3.h[0]" + : 
"=w"(result) + : "0"(a), "w"(b), "x"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlsq_n_u32 (uint32x4_t a, uint32x4_t b, uint32_t c) +{ + uint32x4_t result; + __asm__ ("mls %0.4s, %2.4s, %3.s[0]" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vmlsq_s8 (int8x16_t a, int8x16_t b, int8x16_t c) +{ + int8x16_t result; + __asm__ ("mls %0.16b,%2.16b,%3.16b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmlsq_s16 (int16x8_t a, int16x8_t b, int16x8_t c) +{ + int16x8_t result; + __asm__ ("mls %0.8h,%2.8h,%3.8h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlsq_s32 (int32x4_t a, int32x4_t b, int32x4_t c) +{ + int32x4_t result; + __asm__ ("mls %0.4s,%2.4s,%3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vmlsq_u8 (uint8x16_t a, uint8x16_t b, uint8x16_t c) +{ + uint8x16_t result; + __asm__ ("mls %0.16b,%2.16b,%3.16b" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmlsq_u16 (uint16x8_t a, uint16x8_t b, uint16x8_t c) +{ + uint16x8_t result; + __asm__ ("mls %0.8h,%2.8h,%3.8h" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlsq_u32 (uint32x4_t a, uint32x4_t b, uint32x4_t c) +{ + uint32x4_t result; + __asm__ ("mls %0.4s,%2.4s,%3.4s" + : "=w"(result) + : "0"(a), "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmovl_high_s8 (int8x16_t a) +{ + int16x8_t result; + __asm__ ("sshll2 %0.8h,%1.16b,#0" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmovl_high_s16 (int16x8_t a) +{ + int32x4_t result; + __asm__ ("sshll2 %0.4s,%1.8h,#0" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmovl_high_s32 (int32x4_t a) +{ + int64x2_t result; + __asm__ ("sshll2 %0.2d,%1.4s,#0" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmovl_high_u8 (uint8x16_t a) +{ + uint16x8_t result; + __asm__ ("ushll2 %0.8h,%1.16b,#0" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmovl_high_u16 (uint16x8_t a) +{ + uint32x4_t result; + __asm__ ("ushll2 %0.4s,%1.8h,#0" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmovl_high_u32 (uint32x4_t a) +{ + uint64x2_t result; + __asm__ ("ushll2 %0.2d,%1.4s,#0" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ 
((__always_inline__)) +vmovl_s8 (int8x8_t a) +{ + int16x8_t result; + __asm__ ("sshll %0.8h,%1.8b,#0" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmovl_s16 (int16x4_t a) +{ + int32x4_t result; + __asm__ ("sshll %0.4s,%1.4h,#0" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmovl_s32 (int32x2_t a) +{ + int64x2_t result; + __asm__ ("sshll %0.2d,%1.2s,#0" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmovl_u8 (uint8x8_t a) +{ + uint16x8_t result; + __asm__ ("ushll %0.8h,%1.8b,#0" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmovl_u16 (uint16x4_t a) +{ + uint32x4_t result; + __asm__ ("ushll %0.4s,%1.4h,#0" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmovl_u32 (uint32x2_t a) +{ + uint64x2_t result; + __asm__ ("ushll %0.2d,%1.2s,#0" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vmovn_high_s16 (int8x8_t a, int16x8_t b) +{ + int8x16_t result = vcombine_s8 (a, vcreate_s8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("xtn2 %0.16b,%1.8h" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmovn_high_s32 (int16x4_t a, int32x4_t b) +{ + int16x8_t result = vcombine_s16 (a, vcreate_s16 (__AARCH64_UINT64_C (0x0))); + __asm__ ("xtn2 %0.8h,%1.4s" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmovn_high_s64 (int32x2_t a, int64x2_t b) +{ + int32x4_t result = vcombine_s32 (a, vcreate_s32 (__AARCH64_UINT64_C (0x0))); + __asm__ ("xtn2 %0.4s,%1.2d" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vmovn_high_u16 (uint8x8_t a, uint16x8_t b) +{ + uint8x16_t result = vcombine_u8 (a, vcreate_u8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("xtn2 %0.16b,%1.8h" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmovn_high_u32 (uint16x4_t a, uint32x4_t b) +{ + uint16x8_t result = vcombine_u16 (a, vcreate_u16 (__AARCH64_UINT64_C (0x0))); + __asm__ ("xtn2 %0.8h,%1.4s" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmovn_high_u64 (uint32x2_t a, uint64x2_t b) +{ + uint32x4_t result = vcombine_u32 (a, vcreate_u32 (__AARCH64_UINT64_C (0x0))); + __asm__ ("xtn2 %0.4s,%1.2d" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vmovn_s16 (int16x8_t a) +{ + int8x8_t result; + __asm__ ("xtn %0.8b,%1.8h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmovn_s32 (int32x4_t a) +{ + int16x4_t result; + __asm__ 
("xtn %0.4h,%1.4s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmovn_s64 (int64x2_t a) +{ + int32x2_t result; + __asm__ ("xtn %0.2s,%1.2d" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vmovn_u16 (uint16x8_t a) +{ + uint8x8_t result; + __asm__ ("xtn %0.8b,%1.8h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmovn_u32 (uint32x4_t a) +{ + uint16x4_t result; + __asm__ ("xtn %0.4h,%1.4s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmovn_u64 (uint64x2_t a) +{ + uint32x2_t result; + __asm__ ("xtn %0.2s,%1.2d" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmul_n_f32 (float32x2_t a, float32_t b) +{ + float32x2_t result; + __asm__ ("fmul %0.2s,%1.2s,%2.s[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmul_n_s16 (int16x4_t a, int16_t b) +{ + int16x4_t result; + __asm__ ("mul %0.4h,%1.4h,%2.h[0]" + : "=w"(result) + : "w"(a), "x"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmul_n_s32 (int32x2_t a, int32_t b) +{ + int32x2_t result; + __asm__ ("mul %0.2s,%1.2s,%2.s[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmul_n_u16 (uint16x4_t a, uint16_t b) +{ + uint16x4_t result; + __asm__ ("mul %0.4h,%1.4h,%2.h[0]" + : "=w"(result) + : "w"(a), "x"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmul_n_u32 (uint32x2_t a, uint32_t b) +{ + uint32x2_t result; + __asm__ ("mul %0.2s,%1.2s,%2.s[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +#define vmuld_lane_f64(a, b, c) \ + __extension__ \ + ({ \ + float64x2_t b_ = (b); \ + float64_t a_ = (a); \ + float64_t result; \ + __asm__ ("fmul %d0,%d1,%2.d[%3]" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_high_lane_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x4_t b_ = (b); \ + int16x8_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("smull2 %0.4s, %1.8h, %2.h[%3]" \ + : "=w"(result) \ + : "w"(a_), "x"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_high_lane_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x2_t b_ = (b); \ + int32x4_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("smull2 %0.2d, %1.4s, %2.s[%3]" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_high_lane_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x4_t b_ = (b); \ + uint16x8_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("umull2 %0.4s, %1.8h, %2.h[%3]" \ + : "=w"(result) \ + : "w"(a_), "x"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_high_lane_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x2_t b_ = (b); \ + uint32x4_t a_ = (a); \ + uint64x2_t result; \ + __asm__ 
("umull2 %0.2d, %1.4s, %2.s[%3]" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_high_laneq_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x8_t b_ = (b); \ + int16x8_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("smull2 %0.4s, %1.8h, %2.h[%3]" \ + : "=w"(result) \ + : "w"(a_), "x"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_high_laneq_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x4_t b_ = (b); \ + int32x4_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("smull2 %0.2d, %1.4s, %2.s[%3]" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_high_laneq_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x8_t b_ = (b); \ + uint16x8_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("umull2 %0.4s, %1.8h, %2.h[%3]" \ + : "=w"(result) \ + : "w"(a_), "x"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_high_laneq_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x4_t b_ = (b); \ + uint32x4_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("umull2 %0.2d, %1.4s, %2.s[%3]" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmull_high_n_s16 (int16x8_t a, int16_t b) +{ + int32x4_t result; + __asm__ ("smull2 %0.4s,%1.8h,%2.h[0]" + : "=w"(result) + : "w"(a), "x"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmull_high_n_s32 (int32x4_t a, int32_t b) +{ + int64x2_t result; + __asm__ ("smull2 %0.2d,%1.4s,%2.s[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmull_high_n_u16 (uint16x8_t a, uint16_t b) +{ + uint32x4_t result; + __asm__ ("umull2 %0.4s,%1.8h,%2.h[0]" + : "=w"(result) + : "w"(a), "x"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmull_high_n_u32 (uint32x4_t a, uint32_t b) +{ + uint64x2_t result; + __asm__ ("umull2 %0.2d,%1.4s,%2.s[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vmull_high_p8 (poly8x16_t a, poly8x16_t b) +{ + poly16x8_t result; + __asm__ ("pmull2 %0.8h,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmull_high_s8 (int8x16_t a, int8x16_t b) +{ + int16x8_t result; + __asm__ ("smull2 %0.8h,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmull_high_s16 (int16x8_t a, int16x8_t b) +{ + int32x4_t result; + __asm__ ("smull2 %0.4s,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmull_high_s32 (int32x4_t a, int32x4_t b) +{ + int64x2_t result; + __asm__ ("smull2 %0.2d,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmull_high_u8 (uint8x16_t a, uint8x16_t b) +{ + uint16x8_t result; + __asm__ ("umull2 %0.8h,%1.16b,%2.16b" + 
: "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmull_high_u16 (uint16x8_t a, uint16x8_t b) +{ + uint32x4_t result; + __asm__ ("umull2 %0.4s,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmull_high_u32 (uint32x4_t a, uint32x4_t b) +{ + uint64x2_t result; + __asm__ ("umull2 %0.2d,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +#define vmull_lane_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x4_t b_ = (b); \ + int16x4_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("smull %0.4s,%1.4h,%2.h[%3]" \ + : "=w"(result) \ + : "w"(a_), "x"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_lane_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x2_t b_ = (b); \ + int32x2_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("smull %0.2d,%1.2s,%2.s[%3]" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_lane_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x4_t b_ = (b); \ + uint16x4_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("umull %0.4s,%1.4h,%2.h[%3]" \ + : "=w"(result) \ + : "w"(a_), "x"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_lane_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x2_t b_ = (b); \ + uint32x2_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("umull %0.2d, %1.2s, %2.s[%3]" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_laneq_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x8_t b_ = (b); \ + int16x4_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("smull %0.4s, %1.4h, %2.h[%3]" \ + : "=w"(result) \ + : "w"(a_), "x"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_laneq_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x4_t b_ = (b); \ + int32x2_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("smull %0.2d, %1.2s, %2.s[%3]" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_laneq_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x8_t b_ = (b); \ + uint16x4_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("umull %0.4s, %1.4h, %2.h[%3]" \ + : "=w"(result) \ + : "w"(a_), "x"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmull_laneq_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x4_t b_ = (b); \ + uint32x2_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("umull %0.2d, %1.2s, %2.s[%3]" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmull_n_s16 (int16x4_t a, int16_t b) +{ + int32x4_t result; + __asm__ ("smull %0.4s,%1.4h,%2.h[0]" + : "=w"(result) + : "w"(a), "x"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmull_n_s32 (int32x2_t a, int32_t b) +{ + int64x2_t result; + __asm__ ("smull %0.2d,%1.2s,%2.s[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmull_n_u16 (uint16x4_t a, uint16_t b) +{ + uint32x4_t result; + __asm__ ("umull %0.4s,%1.4h,%2.h[0]" + : "=w"(result) + : "w"(a), "x"(b) + : /* No clobbers */); + return result; +} 
+ +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmull_n_u32 (uint32x2_t a, uint32_t b) +{ + uint64x2_t result; + __asm__ ("umull %0.2d,%1.2s,%2.s[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vmull_p8 (poly8x8_t a, poly8x8_t b) +{ + poly16x8_t result; + __asm__ ("pmull %0.8h, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmull_s8 (int8x8_t a, int8x8_t b) +{ + int16x8_t result; + __asm__ ("smull %0.8h, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmull_s16 (int16x4_t a, int16x4_t b) +{ + int32x4_t result; + __asm__ ("smull %0.4s, %1.4h, %2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmull_s32 (int32x2_t a, int32x2_t b) +{ + int64x2_t result; + __asm__ ("smull %0.2d, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmull_u8 (uint8x8_t a, uint8x8_t b) +{ + uint16x8_t result; + __asm__ ("umull %0.8h, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmull_u16 (uint16x4_t a, uint16x4_t b) +{ + uint32x4_t result; + __asm__ ("umull %0.4s, %1.4h, %2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmull_u32 (uint32x2_t a, uint32x2_t b) +{ + uint64x2_t result; + __asm__ ("umull %0.2d, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmulq_n_f32 (float32x4_t a, float32_t b) +{ + float32x4_t result; + __asm__ ("fmul %0.4s,%1.4s,%2.s[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vmulq_n_f64 (float64x2_t a, float64_t b) +{ + float64x2_t result; + __asm__ ("fmul %0.2d,%1.2d,%2.d[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmulq_n_s16 (int16x8_t a, int16_t b) +{ + int16x8_t result; + __asm__ ("mul %0.8h,%1.8h,%2.h[0]" + : "=w"(result) + : "w"(a), "x"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmulq_n_s32 (int32x4_t a, int32_t b) +{ + int32x4_t result; + __asm__ ("mul %0.4s,%1.4s,%2.s[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmulq_n_u16 (uint16x8_t a, uint16_t b) +{ + uint16x8_t result; + __asm__ ("mul %0.8h,%1.8h,%2.h[0]" + : "=w"(result) + : "w"(a), "x"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmulq_n_u32 (uint32x4_t a, uint32_t b) +{ + uint32x4_t result; + __asm__ ("mul 
%0.4s,%1.4s,%2.s[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +#define vmuls_lane_f32(a, b, c) \ + __extension__ \ + ({ \ + float32x4_t b_ = (b); \ + float32_t a_ = (a); \ + float32_t result; \ + __asm__ ("fmul %s0,%s1,%2.s[%3]" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmulx_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("fmulx %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +#define vmulx_lane_f32(a, b, c) \ + __extension__ \ + ({ \ + float32x4_t b_ = (b); \ + float32x2_t a_ = (a); \ + float32x2_t result; \ + __asm__ ("fmulx %0.2s,%1.2s,%2.s[%3]" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vmulxd_f64 (float64_t a, float64_t b) +{ + float64_t result; + __asm__ ("fmulx %d0, %d1, %d2" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmulxq_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("fmulx %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vmulxq_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("fmulx %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +#define vmulxq_lane_f32(a, b, c) \ + __extension__ \ + ({ \ + float32x4_t b_ = (b); \ + float32x4_t a_ = (a); \ + float32x4_t result; \ + __asm__ ("fmulx %0.4s,%1.4s,%2.s[%3]" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vmulxq_lane_f64(a, b, c) \ + __extension__ \ + ({ \ + float64x2_t b_ = (b); \ + float64x2_t a_ = (a); \ + float64x2_t result; \ + __asm__ ("fmulx %0.2d,%1.2d,%2.d[%3]" \ + : "=w"(result) \ + : "w"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vmulxs_f32 (float32_t a, float32_t b) +{ + float32_t result; + __asm__ ("fmulx %s0, %s1, %s2" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vmvn_p8 (poly8x8_t a) +{ + poly8x8_t result; + __asm__ ("mvn %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vmvn_s8 (int8x8_t a) +{ + int8x8_t result; + __asm__ ("mvn %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmvn_s16 (int16x4_t a) +{ + int16x4_t result; + __asm__ ("mvn %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmvn_s32 (int32x2_t a) +{ + int32x2_t result; + __asm__ ("mvn %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vmvn_u8 (uint8x8_t a) +{ + uint8x8_t result; + __asm__ ("mvn %0.8b,%1.8b" + : 
"=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmvn_u16 (uint16x4_t a) +{ + uint16x4_t result; + __asm__ ("mvn %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmvn_u32 (uint32x2_t a) +{ + uint32x2_t result; + __asm__ ("mvn %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vmvnq_p8 (poly8x16_t a) +{ + poly8x16_t result; + __asm__ ("mvn %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vmvnq_s8 (int8x16_t a) +{ + int8x16_t result; + __asm__ ("mvn %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmvnq_s16 (int16x8_t a) +{ + int16x8_t result; + __asm__ ("mvn %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmvnq_s32 (int32x4_t a) +{ + int32x4_t result; + __asm__ ("mvn %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vmvnq_u8 (uint8x16_t a) +{ + uint8x16_t result; + __asm__ ("mvn %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmvnq_u16 (uint16x8_t a) +{ + uint16x8_t result; + __asm__ ("mvn %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmvnq_u32 (uint32x4_t a) +{ + uint32x4_t result; + __asm__ ("mvn %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vpadal_s8 (int16x4_t a, int8x8_t b) +{ + int16x4_t result; + __asm__ ("sadalp %0.4h,%2.8b" + : "=w"(result) + : "0"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vpadal_s16 (int32x2_t a, int16x4_t b) +{ + int32x2_t result; + __asm__ ("sadalp %0.2s,%2.4h" + : "=w"(result) + : "0"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vpadal_s32 (int64x1_t a, int32x2_t b) +{ + int64x1_t result; + __asm__ ("sadalp %0.1d,%2.2s" + : "=w"(result) + : "0"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vpadal_u8 (uint16x4_t a, uint8x8_t b) +{ + uint16x4_t result; + __asm__ ("uadalp %0.4h,%2.8b" + : "=w"(result) + : "0"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vpadal_u16 (uint32x2_t a, uint16x4_t b) +{ + uint32x2_t result; + __asm__ ("uadalp %0.2s,%2.4h" + : "=w"(result) + : "0"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vpadal_u32 (uint64x1_t a, uint32x2_t b) +{ + uint64x1_t 
result; + __asm__ ("uadalp %0.1d,%2.2s" + : "=w"(result) + : "0"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vpadalq_s8 (int16x8_t a, int8x16_t b) +{ + int16x8_t result; + __asm__ ("sadalp %0.8h,%2.16b" + : "=w"(result) + : "0"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vpadalq_s16 (int32x4_t a, int16x8_t b) +{ + int32x4_t result; + __asm__ ("sadalp %0.4s,%2.8h" + : "=w"(result) + : "0"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vpadalq_s32 (int64x2_t a, int32x4_t b) +{ + int64x2_t result; + __asm__ ("sadalp %0.2d,%2.4s" + : "=w"(result) + : "0"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vpadalq_u8 (uint16x8_t a, uint8x16_t b) +{ + uint16x8_t result; + __asm__ ("uadalp %0.8h,%2.16b" + : "=w"(result) + : "0"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vpadalq_u16 (uint32x4_t a, uint16x8_t b) +{ + uint32x4_t result; + __asm__ ("uadalp %0.4s,%2.8h" + : "=w"(result) + : "0"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vpadalq_u32 (uint64x2_t a, uint32x4_t b) +{ + uint64x2_t result; + __asm__ ("uadalp %0.2d,%2.4s" + : "=w"(result) + : "0"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vpadd_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("faddp %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vpadd_s8 (int8x8_t __a, int8x8_t __b) +{ + return __builtin_aarch64_addpv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vpadd_s16 (int16x4_t __a, int16x4_t __b) +{ + return __builtin_aarch64_addpv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vpadd_s32 (int32x2_t __a, int32x2_t __b) +{ + return __builtin_aarch64_addpv2si (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vpadd_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_addpv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vpadd_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_addpv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vpadd_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_addpv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vpaddd_f64 (float64x2_t a) +{ + float64_t result; + __asm__ ("faddp %d0,%1.2d" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vpaddl_s8 (int8x8_t a) +{ + int16x4_t result; + __asm__ ("saddlp %0.4h,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + 
+__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vpaddl_s16 (int16x4_t a) +{ + int32x2_t result; + __asm__ ("saddlp %0.2s,%1.4h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vpaddl_s32 (int32x2_t a) +{ + int64x1_t result; + __asm__ ("saddlp %0.1d,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vpaddl_u8 (uint8x8_t a) +{ + uint16x4_t result; + __asm__ ("uaddlp %0.4h,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vpaddl_u16 (uint16x4_t a) +{ + uint32x2_t result; + __asm__ ("uaddlp %0.2s,%1.4h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vpaddl_u32 (uint32x2_t a) +{ + uint64x1_t result; + __asm__ ("uaddlp %0.1d,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vpaddlq_s8 (int8x16_t a) +{ + int16x8_t result; + __asm__ ("saddlp %0.8h,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vpaddlq_s16 (int16x8_t a) +{ + int32x4_t result; + __asm__ ("saddlp %0.4s,%1.8h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vpaddlq_s32 (int32x4_t a) +{ + int64x2_t result; + __asm__ ("saddlp %0.2d,%1.4s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vpaddlq_u8 (uint8x16_t a) +{ + uint16x8_t result; + __asm__ ("uaddlp %0.8h,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vpaddlq_u16 (uint16x8_t a) +{ + uint32x4_t result; + __asm__ ("uaddlp %0.4s,%1.8h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vpaddlq_u32 (uint32x4_t a) +{ + uint64x2_t result; + __asm__ ("uaddlp %0.2d,%1.4s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vpaddq_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("faddp %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vpaddq_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("faddp %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vpaddq_s8 (int8x16_t a, int8x16_t b) +{ + int8x16_t result; + __asm__ ("addp %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vpaddq_s16 (int16x8_t a, int16x8_t b) +{ + int16x8_t result; + __asm__ ("addp %0.8h,%1.8h,%2.8h" + : 
"=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vpaddq_s32 (int32x4_t a, int32x4_t b) +{ + int32x4_t result; + __asm__ ("addp %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vpaddq_s64 (int64x2_t a, int64x2_t b) +{ + int64x2_t result; + __asm__ ("addp %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vpaddq_u8 (uint8x16_t a, uint8x16_t b) +{ + uint8x16_t result; + __asm__ ("addp %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vpaddq_u16 (uint16x8_t a, uint16x8_t b) +{ + uint16x8_t result; + __asm__ ("addp %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vpaddq_u32 (uint32x4_t a, uint32x4_t b) +{ + uint32x4_t result; + __asm__ ("addp %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vpaddq_u64 (uint64x2_t a, uint64x2_t b) +{ + uint64x2_t result; + __asm__ ("addp %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vpadds_f32 (float32x2_t a) +{ + float32_t result; + __asm__ ("faddp %s0,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vpmax_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("fmaxp %0.2s, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vpmax_s8 (int8x8_t a, int8x8_t b) +{ + int8x8_t result; + __asm__ ("smaxp %0.8b, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vpmax_s16 (int16x4_t a, int16x4_t b) +{ + int16x4_t result; + __asm__ ("smaxp %0.4h, %1.4h, %2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vpmax_s32 (int32x2_t a, int32x2_t b) +{ + int32x2_t result; + __asm__ ("smaxp %0.2s, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vpmax_u8 (uint8x8_t a, uint8x8_t b) +{ + uint8x8_t result; + __asm__ ("umaxp %0.8b, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vpmax_u16 (uint16x4_t a, uint16x4_t b) +{ + uint16x4_t result; + __asm__ ("umaxp %0.4h, %1.4h, %2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vpmax_u32 (uint32x2_t a, uint32x2_t b) +{ + 
uint32x2_t result; + __asm__ ("umaxp %0.2s, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vpmaxnm_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("fmaxnmp %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vpmaxnmq_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("fmaxnmp %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vpmaxnmq_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("fmaxnmp %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vpmaxnmqd_f64 (float64x2_t a) +{ + float64_t result; + __asm__ ("fmaxnmp %d0,%1.2d" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vpmaxnms_f32 (float32x2_t a) +{ + float32_t result; + __asm__ ("fmaxnmp %s0,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vpmaxq_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("fmaxp %0.4s, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vpmaxq_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("fmaxp %0.2d, %1.2d, %2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vpmaxq_s8 (int8x16_t a, int8x16_t b) +{ + int8x16_t result; + __asm__ ("smaxp %0.16b, %1.16b, %2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vpmaxq_s16 (int16x8_t a, int16x8_t b) +{ + int16x8_t result; + __asm__ ("smaxp %0.8h, %1.8h, %2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vpmaxq_s32 (int32x4_t a, int32x4_t b) +{ + int32x4_t result; + __asm__ ("smaxp %0.4s, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vpmaxq_u8 (uint8x16_t a, uint8x16_t b) +{ + uint8x16_t result; + __asm__ ("umaxp %0.16b, %1.16b, %2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vpmaxq_u16 (uint16x8_t a, uint16x8_t b) +{ + uint16x8_t result; + __asm__ ("umaxp %0.8h, %1.8h, %2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vpmaxq_u32 (uint32x4_t a, uint32x4_t b) +{ + uint32x4_t result; + __asm__ ("umaxp %0.4s, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static 
__inline float64_t __attribute__ ((__always_inline__)) +vpmaxqd_f64 (float64x2_t a) +{ + float64_t result; + __asm__ ("fmaxp %d0,%1.2d" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vpmaxs_f32 (float32x2_t a) +{ + float32_t result; + __asm__ ("fmaxp %s0,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vpmin_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("fminp %0.2s, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vpmin_s8 (int8x8_t a, int8x8_t b) +{ + int8x8_t result; + __asm__ ("sminp %0.8b, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vpmin_s16 (int16x4_t a, int16x4_t b) +{ + int16x4_t result; + __asm__ ("sminp %0.4h, %1.4h, %2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vpmin_s32 (int32x2_t a, int32x2_t b) +{ + int32x2_t result; + __asm__ ("sminp %0.2s, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vpmin_u8 (uint8x8_t a, uint8x8_t b) +{ + uint8x8_t result; + __asm__ ("uminp %0.8b, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vpmin_u16 (uint16x4_t a, uint16x4_t b) +{ + uint16x4_t result; + __asm__ ("uminp %0.4h, %1.4h, %2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vpmin_u32 (uint32x2_t a, uint32x2_t b) +{ + uint32x2_t result; + __asm__ ("uminp %0.2s, %1.2s, %2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vpminnm_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("fminnmp %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vpminnmq_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("fminnmp %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vpminnmq_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("fminnmp %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vpminnmqd_f64 (float64x2_t a) +{ + float64_t result; + __asm__ ("fminnmp %d0,%1.2d" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vpminnms_f32 (float32x2_t a) +{ + float32_t result; + __asm__ ("fminnmp %s0,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + 
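+/* Editor's note: an illustrative sketch, not part of the imported
+   header.  It combines the pairwise vpminnm_f32 and the scalar
+   vpminnms_f32 defined above to reduce four floats to their smallest
+   value under IEEE minNum (NaN-ignoring) semantics.  The name
+   __neon_demo_minnm4 is a hypothetical helper, not a GCC or ACLE
+   API.  */
+__extension__ static __inline float32_t __attribute__ ((__always_inline__))
+__neon_demo_minnm4 (float32x4_t v)
+{
+  /* First step yields {minNum(v0,v1), minNum(v2,v3)}; the scalar
+     pairwise op then reduces that pair to the overall minimum.  */
+  float32x2_t m = vpminnm_f32 (vget_low_f32 (v), vget_high_f32 (v));
+  return vpminnms_f32 (m);
+}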
+__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vpminq_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("fminp %0.4s, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vpminq_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("fminp %0.2d, %1.2d, %2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vpminq_s8 (int8x16_t a, int8x16_t b) +{ + int8x16_t result; + __asm__ ("sminp %0.16b, %1.16b, %2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vpminq_s16 (int16x8_t a, int16x8_t b) +{ + int16x8_t result; + __asm__ ("sminp %0.8h, %1.8h, %2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vpminq_s32 (int32x4_t a, int32x4_t b) +{ + int32x4_t result; + __asm__ ("sminp %0.4s, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vpminq_u8 (uint8x16_t a, uint8x16_t b) +{ + uint8x16_t result; + __asm__ ("uminp %0.16b, %1.16b, %2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vpminq_u16 (uint16x8_t a, uint16x8_t b) +{ + uint16x8_t result; + __asm__ ("uminp %0.8h, %1.8h, %2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vpminq_u32 (uint32x4_t a, uint32x4_t b) +{ + uint32x4_t result; + __asm__ ("uminp %0.4s, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vpminqd_f64 (float64x2_t a) +{ + float64_t result; + __asm__ ("fminp %d0,%1.2d" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vpmins_f32 (float32x2_t a) +{ + float32_t result; + __asm__ ("fminp %s0,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqdmulh_n_s16 (int16x4_t a, int16_t b) +{ + int16x4_t result; + __asm__ ("sqdmulh %0.4h,%1.4h,%2.h[0]" + : "=w"(result) + : "w"(a), "x"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqdmulh_n_s32 (int32x2_t a, int32_t b) +{ + int32x2_t result; + __asm__ ("sqdmulh %0.2s,%1.2s,%2.s[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqdmulhq_n_s16 (int16x8_t a, int16_t b) +{ + int16x8_t result; + __asm__ ("sqdmulh %0.8h,%1.8h,%2.h[0]" + : "=w"(result) + : "w"(a), "x"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmulhq_n_s32 (int32x4_t a, int32_t b) +{ + int32x4_t result; + __asm__ ("sqdmulh %0.4s,%1.4s,%2.s[0]" + : 
"=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqmovn_high_s16 (int8x8_t a, int16x8_t b) +{ + int8x16_t result = vcombine_s8 (a, vcreate_s8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("sqxtn2 %0.16b, %1.8h" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqmovn_high_s32 (int16x4_t a, int32x4_t b) +{ + int16x8_t result = vcombine_s16 (a, vcreate_s16 (__AARCH64_UINT64_C (0x0))); + __asm__ ("sqxtn2 %0.8h, %1.4s" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqmovn_high_s64 (int32x2_t a, int64x2_t b) +{ + int32x4_t result = vcombine_s32 (a, vcreate_s32 (__AARCH64_UINT64_C (0x0))); + __asm__ ("sqxtn2 %0.4s, %1.2d" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqmovn_high_u16 (uint8x8_t a, uint16x8_t b) +{ + uint8x16_t result = vcombine_u8 (a, vcreate_u8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("uqxtn2 %0.16b, %1.8h" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vqmovn_high_u32 (uint16x4_t a, uint32x4_t b) +{ + uint16x8_t result = vcombine_u16 (a, vcreate_u16 (__AARCH64_UINT64_C (0x0))); + __asm__ ("uqxtn2 %0.8h, %1.4s" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vqmovn_high_u64 (uint32x2_t a, uint64x2_t b) +{ + uint32x4_t result = vcombine_u32 (a, vcreate_u32 (__AARCH64_UINT64_C (0x0))); + __asm__ ("uqxtn2 %0.4s, %1.2d" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqmovun_high_s16 (uint8x8_t a, int16x8_t b) +{ + uint8x16_t result = vcombine_u8 (a, vcreate_u8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("sqxtun2 %0.16b, %1.8h" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vqmovun_high_s32 (uint16x4_t a, int32x4_t b) +{ + uint16x8_t result = vcombine_u16 (a, vcreate_u16 (__AARCH64_UINT64_C (0x0))); + __asm__ ("sqxtun2 %0.8h, %1.4s" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vqmovun_high_s64 (uint32x2_t a, int64x2_t b) +{ + uint32x4_t result = vcombine_u32 (a, vcreate_u32 (__AARCH64_UINT64_C (0x0))); + __asm__ ("sqxtun2 %0.4s, %1.2d" + : "+w"(result) + : "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqrdmulh_n_s16 (int16x4_t a, int16_t b) +{ + int16x4_t result; + __asm__ ("sqrdmulh %0.4h,%1.4h,%2.h[0]" + : "=w"(result) + : "w"(a), "x"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqrdmulh_n_s32 (int32x2_t a, int32_t b) +{ + int32x2_t result; + __asm__ ("sqrdmulh %0.2s,%1.2s,%2.s[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqrdmulhq_n_s16 (int16x8_t a, int16_t b) +{ + int16x8_t 
result; + __asm__ ("sqrdmulh %0.8h,%1.8h,%2.h[0]" + : "=w"(result) + : "w"(a), "x"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqrdmulhq_n_s32 (int32x4_t a, int32_t b) +{ + int32x4_t result; + __asm__ ("sqrdmulh %0.4s,%1.4s,%2.s[0]" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +#define vqrshrn_high_n_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x8_t b_ = (b); \ + int8x8_t a_ = (a); \ + int8x16_t result = vcombine_s8 \ + (a_, vcreate_s8 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("sqrshrn2 %0.16b, %1.8h, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqrshrn_high_n_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x4_t b_ = (b); \ + int16x4_t a_ = (a); \ + int16x8_t result = vcombine_s16 \ + (a_, vcreate_s16 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("sqrshrn2 %0.8h, %1.4s, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqrshrn_high_n_s64(a, b, c) \ + __extension__ \ + ({ \ + int64x2_t b_ = (b); \ + int32x2_t a_ = (a); \ + int32x4_t result = vcombine_s32 \ + (a_, vcreate_s32 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("sqrshrn2 %0.4s, %1.2d, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqrshrn_high_n_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x8_t b_ = (b); \ + uint8x8_t a_ = (a); \ + uint8x16_t result = vcombine_u8 \ + (a_, vcreate_u8 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("uqrshrn2 %0.16b, %1.8h, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqrshrn_high_n_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x4_t b_ = (b); \ + uint16x4_t a_ = (a); \ + uint16x8_t result = vcombine_u16 \ + (a_, vcreate_u16 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("uqrshrn2 %0.8h, %1.4s, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqrshrn_high_n_u64(a, b, c) \ + __extension__ \ + ({ \ + uint64x2_t b_ = (b); \ + uint32x2_t a_ = (a); \ + uint32x4_t result = vcombine_u32 \ + (a_, vcreate_u32 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("uqrshrn2 %0.4s, %1.2d, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqrshrun_high_n_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x8_t b_ = (b); \ + uint8x8_t a_ = (a); \ + uint8x16_t result = vcombine_u8 \ + (a_, vcreate_u8 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("sqrshrun2 %0.16b, %1.8h, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqrshrun_high_n_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x4_t b_ = (b); \ + uint16x4_t a_ = (a); \ + uint16x8_t result = vcombine_u16 \ + (a_, vcreate_u16 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("sqrshrun2 %0.8h, %1.4s, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqrshrun_high_n_s64(a, b, c) \ + __extension__ \ + ({ \ + int64x2_t b_ = (b); \ + uint32x2_t a_ = (a); \ + uint32x4_t result = vcombine_u32 \ + (a_, vcreate_u32 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("sqrshrun2 %0.4s, %1.2d, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqshrn_high_n_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x8_t b_ = (b); \ + int8x8_t a_ = (a); \ + int8x16_t result = vcombine_s8 \ + (a_, vcreate_s8 \ + 
(__AARCH64_UINT64_C (0x0))); \ + __asm__ ("sqshrn2 %0.16b, %1.8h, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqshrn_high_n_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x4_t b_ = (b); \ + int16x4_t a_ = (a); \ + int16x8_t result = vcombine_s16 \ + (a_, vcreate_s16 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("sqshrn2 %0.8h, %1.4s, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqshrn_high_n_s64(a, b, c) \ + __extension__ \ + ({ \ + int64x2_t b_ = (b); \ + int32x2_t a_ = (a); \ + int32x4_t result = vcombine_s32 \ + (a_, vcreate_s32 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("sqshrn2 %0.4s, %1.2d, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqshrn_high_n_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x8_t b_ = (b); \ + uint8x8_t a_ = (a); \ + uint8x16_t result = vcombine_u8 \ + (a_, vcreate_u8 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("uqshrn2 %0.16b, %1.8h, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqshrn_high_n_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x4_t b_ = (b); \ + uint16x4_t a_ = (a); \ + uint16x8_t result = vcombine_u16 \ + (a_, vcreate_u16 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("uqshrn2 %0.8h, %1.4s, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqshrn_high_n_u64(a, b, c) \ + __extension__ \ + ({ \ + uint64x2_t b_ = (b); \ + uint32x2_t a_ = (a); \ + uint32x4_t result = vcombine_u32 \ + (a_, vcreate_u32 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("uqshrn2 %0.4s, %1.2d, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqshrun_high_n_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x8_t b_ = (b); \ + uint8x8_t a_ = (a); \ + uint8x16_t result = vcombine_u8 \ + (a_, vcreate_u8 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("sqshrun2 %0.16b, %1.8h, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqshrun_high_n_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x4_t b_ = (b); \ + uint16x4_t a_ = (a); \ + uint16x8_t result = vcombine_u16 \ + (a_, vcreate_u16 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("sqshrun2 %0.8h, %1.4s, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vqshrun_high_n_s64(a, b, c) \ + __extension__ \ + ({ \ + int64x2_t b_ = (b); \ + uint32x2_t a_ = (a); \ + uint32x4_t result = vcombine_u32 \ + (a_, vcreate_u32 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("sqshrun2 %0.4s, %1.2d, #%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vrbit_s8 (int8x8_t a) +{ + int8x8_t result; + __asm__ ("rbit %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vrbit_u8 (uint8x8_t a) +{ + uint8x8_t result; + __asm__ ("rbit %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vrbitq_s8 (int8x16_t a) +{ + int8x16_t result; + __asm__ ("rbit %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ 
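+/* The vq[r]shr[u]n_high_n forms above are macros rather than inline
+   functions because the shift count feeds an "i" (immediate) constraint
+   and must be a compile-time constant.  Usage sketch with hypothetical
+   Q15 inputs q15a, q15b (vqrshrn_n_s16 is defined elsewhere in this
+   header):
+     int8x8_t  lo   = vqrshrn_n_s16 (q15a, 8);
+     int8x16_t both = vqrshrn_high_n_s16 (lo, q15b, 8);  */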
((__always_inline__)) +vrbitq_u8 (uint8x16_t a) +{ + uint8x16_t result; + __asm__ ("rbit %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vrecpe_u32 (uint32x2_t a) +{ + uint32x2_t result; + __asm__ ("urecpe %0.2s,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vrecpeq_u32 (uint32x4_t a) +{ + uint32x4_t result; + __asm__ ("urecpe %0.4s,%1.4s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vrev16_p8 (poly8x8_t a) +{ + poly8x8_t result; + __asm__ ("rev16 %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vrev16_s8 (int8x8_t a) +{ + int8x8_t result; + __asm__ ("rev16 %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vrev16_u8 (uint8x8_t a) +{ + uint8x8_t result; + __asm__ ("rev16 %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vrev16q_p8 (poly8x16_t a) +{ + poly8x16_t result; + __asm__ ("rev16 %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vrev16q_s8 (int8x16_t a) +{ + int8x16_t result; + __asm__ ("rev16 %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vrev16q_u8 (uint8x16_t a) +{ + uint8x16_t result; + __asm__ ("rev16 %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vrev32_p8 (poly8x8_t a) +{ + poly8x8_t result; + __asm__ ("rev32 %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vrev32_p16 (poly16x4_t a) +{ + poly16x4_t result; + __asm__ ("rev32 %0.4h,%1.4h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vrev32_s8 (int8x8_t a) +{ + int8x8_t result; + __asm__ ("rev32 %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vrev32_s16 (int16x4_t a) +{ + int16x4_t result; + __asm__ ("rev32 %0.4h,%1.4h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vrev32_u8 (uint8x8_t a) +{ + uint8x8_t result; + __asm__ ("rev32 %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vrev32_u16 (uint16x4_t a) +{ + uint16x4_t result; + __asm__ ("rev32 %0.4h,%1.4h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vrev32q_p8 (poly8x16_t a) +{ + poly8x16_t 
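+/* RBIT reverses the bits inside each byte (useful for CRC bit
+   reflection); REV16, REV32 and REV64 reverse the element order inside
+   each 16-, 32- or 64-bit container, so vrev16_u8 is a per-halfword
+   byte swap.  Sketch with a hypothetical input 'bytes' = b0 b1 b2 b3 ...:
+     uint8x8_t sw16 = vrev16_u8 (bytes);    gives b1 b0 b3 b2 ...  */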
result; + __asm__ ("rev32 %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vrev32q_p16 (poly16x8_t a) +{ + poly16x8_t result; + __asm__ ("rev32 %0.8h,%1.8h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vrev32q_s8 (int8x16_t a) +{ + int8x16_t result; + __asm__ ("rev32 %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vrev32q_s16 (int16x8_t a) +{ + int16x8_t result; + __asm__ ("rev32 %0.8h,%1.8h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vrev32q_u8 (uint8x16_t a) +{ + uint8x16_t result; + __asm__ ("rev32 %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vrev32q_u16 (uint16x8_t a) +{ + uint16x8_t result; + __asm__ ("rev32 %0.8h,%1.8h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vrev64_f32 (float32x2_t a) +{ + float32x2_t result; + __asm__ ("rev64 %0.2s,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vrev64_p8 (poly8x8_t a) +{ + poly8x8_t result; + __asm__ ("rev64 %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vrev64_p16 (poly16x4_t a) +{ + poly16x4_t result; + __asm__ ("rev64 %0.4h,%1.4h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vrev64_s8 (int8x8_t a) +{ + int8x8_t result; + __asm__ ("rev64 %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vrev64_s16 (int16x4_t a) +{ + int16x4_t result; + __asm__ ("rev64 %0.4h,%1.4h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vrev64_s32 (int32x2_t a) +{ + int32x2_t result; + __asm__ ("rev64 %0.2s,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vrev64_u8 (uint8x8_t a) +{ + uint8x8_t result; + __asm__ ("rev64 %0.8b,%1.8b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vrev64_u16 (uint16x4_t a) +{ + uint16x4_t result; + __asm__ ("rev64 %0.4h,%1.4h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vrev64_u32 (uint32x2_t a) +{ + uint32x2_t result; + __asm__ ("rev64 %0.2s,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vrev64q_f32 (float32x4_t a) +{ + float32x4_t result; + __asm__ ("rev64 %0.4s,%1.4s" + : "=w"(result) 
+ : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vrev64q_p8 (poly8x16_t a) +{ + poly8x16_t result; + __asm__ ("rev64 %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vrev64q_p16 (poly16x8_t a) +{ + poly16x8_t result; + __asm__ ("rev64 %0.8h,%1.8h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vrev64q_s8 (int8x16_t a) +{ + int8x16_t result; + __asm__ ("rev64 %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vrev64q_s16 (int16x8_t a) +{ + int16x8_t result; + __asm__ ("rev64 %0.8h,%1.8h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vrev64q_s32 (int32x4_t a) +{ + int32x4_t result; + __asm__ ("rev64 %0.4s,%1.4s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vrev64q_u8 (uint8x16_t a) +{ + uint8x16_t result; + __asm__ ("rev64 %0.16b,%1.16b" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vrev64q_u16 (uint16x8_t a) +{ + uint16x8_t result; + __asm__ ("rev64 %0.8h,%1.8h" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vrev64q_u32 (uint32x4_t a) +{ + uint32x4_t result; + __asm__ ("rev64 %0.4s,%1.4s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +#define vrshrn_high_n_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x8_t b_ = (b); \ + int8x8_t a_ = (a); \ + int8x16_t result = vcombine_s8 \ + (a_, vcreate_s8 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("rshrn2 %0.16b,%1.8h,#%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vrshrn_high_n_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x4_t b_ = (b); \ + int16x4_t a_ = (a); \ + int16x8_t result = vcombine_s16 \ + (a_, vcreate_s16 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("rshrn2 %0.8h,%1.4s,#%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vrshrn_high_n_s64(a, b, c) \ + __extension__ \ + ({ \ + int64x2_t b_ = (b); \ + int32x2_t a_ = (a); \ + int32x4_t result = vcombine_s32 \ + (a_, vcreate_s32 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("rshrn2 %0.4s,%1.2d,#%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vrshrn_high_n_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x8_t b_ = (b); \ + uint8x8_t a_ = (a); \ + uint8x16_t result = vcombine_u8 \ + (a_, vcreate_u8 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("rshrn2 %0.16b,%1.8h,#%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vrshrn_high_n_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x4_t b_ = (b); \ + uint16x4_t a_ = (a); \ + uint16x8_t result = vcombine_u16 \ + (a_, vcreate_u16 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("rshrn2 %0.8h,%1.4s,#%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + 
}) + +#define vrshrn_high_n_u64(a, b, c) \ + __extension__ \ + ({ \ + uint64x2_t b_ = (b); \ + uint32x2_t a_ = (a); \ + uint32x4_t result = vcombine_u32 \ + (a_, vcreate_u32 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("rshrn2 %0.4s,%1.2d,#%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vrshrn_n_s16(a, b) \ + __extension__ \ + ({ \ + int16x8_t a_ = (a); \ + int8x8_t result; \ + __asm__ ("rshrn %0.8b,%1.8h,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vrshrn_n_s32(a, b) \ + __extension__ \ + ({ \ + int32x4_t a_ = (a); \ + int16x4_t result; \ + __asm__ ("rshrn %0.4h,%1.4s,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vrshrn_n_s64(a, b) \ + __extension__ \ + ({ \ + int64x2_t a_ = (a); \ + int32x2_t result; \ + __asm__ ("rshrn %0.2s,%1.2d,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vrshrn_n_u16(a, b) \ + __extension__ \ + ({ \ + uint16x8_t a_ = (a); \ + uint8x8_t result; \ + __asm__ ("rshrn %0.8b,%1.8h,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vrshrn_n_u32(a, b) \ + __extension__ \ + ({ \ + uint32x4_t a_ = (a); \ + uint16x4_t result; \ + __asm__ ("rshrn %0.4h,%1.4s,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vrshrn_n_u64(a, b) \ + __extension__ \ + ({ \ + uint64x2_t a_ = (a); \ + uint32x2_t result; \ + __asm__ ("rshrn %0.2s,%1.2d,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vrsqrte_f32 (float32x2_t a) +{ + float32x2_t result; + __asm__ ("frsqrte %0.2s,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vrsqrte_f64 (float64x1_t a) +{ + float64x1_t result; + __asm__ ("frsqrte %d0,%d1" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vrsqrte_u32 (uint32x2_t a) +{ + uint32x2_t result; + __asm__ ("ursqrte %0.2s,%1.2s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vrsqrted_f64 (float64_t a) +{ + float64_t result; + __asm__ ("frsqrte %d0,%d1" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vrsqrteq_f32 (float32x4_t a) +{ + float32x4_t result; + __asm__ ("frsqrte %0.4s,%1.4s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vrsqrteq_f64 (float64x2_t a) +{ + float64x2_t result; + __asm__ ("frsqrte %0.2d,%1.2d" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vrsqrteq_u32 (uint32x4_t a) +{ + uint32x4_t result; + __asm__ ("ursqrte %0.4s,%1.4s" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vrsqrtes_f32 (float32_t a) +{ + float32_t result; + __asm__ ("frsqrte %s0,%s1" + : "=w"(result) + : "w"(a) + : /* No clobbers */); + 
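+/* FRSQRTE returns only a rough (about 8-bit) estimate of 1/sqrt(a);
+   FRSQRTS computes (3 - a*b)/2, the Newton-Raphson step that refines
+   it.  A conventional two-step refinement, assuming vmul_f32 from
+   elsewhere in this header and a hypothetical input 'a':
+     float32x2_t e = vrsqrte_f32 (a);
+     e = vmul_f32 (e, vrsqrts_f32 (vmul_f32 (a, e), e));
+     e = vmul_f32 (e, vrsqrts_f32 (vmul_f32 (a, e), e));  */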
return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vrsqrts_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("frsqrts %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vrsqrtsd_f64 (float64_t a, float64_t b) +{ + float64_t result; + __asm__ ("frsqrts %d0,%d1,%d2" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vrsqrtsq_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("frsqrts %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vrsqrtsq_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("frsqrts %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vrsqrtss_f32 (float32_t a, float32_t b) +{ + float32_t result; + __asm__ ("frsqrts %s0,%s1,%s2" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +/* Note: vrsrtsq_f64 below is a misspelled duplicate of vrsqrtsq_f64 (missing 'q'); it is kept as-is for backwards source compatibility.  */ + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vrsrtsq_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("frsqrts %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vrsubhn_high_s16 (int8x8_t a, int16x8_t b, int16x8_t c) +{ + int8x16_t result = vcombine_s8 (a, vcreate_s8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("rsubhn2 %0.16b, %1.8h, %2.8h" + : "+w"(result) + : "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vrsubhn_high_s32 (int16x4_t a, int32x4_t b, int32x4_t c) +{ + int16x8_t result = vcombine_s16 (a, vcreate_s16 (__AARCH64_UINT64_C (0x0))); + __asm__ ("rsubhn2 %0.8h, %1.4s, %2.4s" + : "+w"(result) + : "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vrsubhn_high_s64 (int32x2_t a, int64x2_t b, int64x2_t c) +{ + int32x4_t result = vcombine_s32 (a, vcreate_s32 (__AARCH64_UINT64_C (0x0))); + __asm__ ("rsubhn2 %0.4s, %1.2d, %2.2d" + : "+w"(result) + : "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vrsubhn_high_u16 (uint8x8_t a, uint16x8_t b, uint16x8_t c) +{ + uint8x16_t result = vcombine_u8 (a, vcreate_u8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("rsubhn2 %0.16b, %1.8h, %2.8h" + : "+w"(result) + : "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vrsubhn_high_u32 (uint16x4_t a, uint32x4_t b, uint32x4_t c) +{ + uint16x8_t result = vcombine_u16 (a, vcreate_u16 (__AARCH64_UINT64_C (0x0))); + __asm__ ("rsubhn2 %0.8h, %1.4s, %2.4s" + : "+w"(result) + : "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vrsubhn_high_u64 (uint32x2_t a, uint64x2_t b, uint64x2_t c) +{ + uint32x4_t result = vcombine_u32 (a, vcreate_u32 (__AARCH64_UINT64_C (0x0))); + __asm__ 
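+/* RSUBHN subtracts two wide vectors and keeps the rounded high half of
+   each lane, i.e. (a - b + 0x80) >> 8 per 16-bit lane for vrsubhn_u16;
+   the _high variants here pack that result into the upper half of a
+   128-bit vector via RSUBHN2.  Sketch with hypothetical uint16x8_t
+   inputs a16..d16 (vrsubhn_u16 is defined just below):
+     uint8x8_t  lo   = vrsubhn_u16 (a16, b16);
+     uint8x16_t both = vrsubhn_high_u16 (lo, c16, d16);  */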
("rsubhn2 %0.4s, %1.2d, %2.2d" + : "+w"(result) + : "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vrsubhn_s16 (int16x8_t a, int16x8_t b) +{ + int8x8_t result; + __asm__ ("rsubhn %0.8b, %1.8h, %2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vrsubhn_s32 (int32x4_t a, int32x4_t b) +{ + int16x4_t result; + __asm__ ("rsubhn %0.4h, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vrsubhn_s64 (int64x2_t a, int64x2_t b) +{ + int32x2_t result; + __asm__ ("rsubhn %0.2s, %1.2d, %2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vrsubhn_u16 (uint16x8_t a, uint16x8_t b) +{ + uint8x8_t result; + __asm__ ("rsubhn %0.8b, %1.8h, %2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vrsubhn_u32 (uint32x4_t a, uint32x4_t b) +{ + uint16x4_t result; + __asm__ ("rsubhn %0.4h, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vrsubhn_u64 (uint64x2_t a, uint64x2_t b) +{ + uint32x2_t result; + __asm__ ("rsubhn %0.2s, %1.2d, %2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +#define vset_lane_f32(a, b, c) \ + __extension__ \ + ({ \ + float32x2_t b_ = (b); \ + float32_t a_ = (a); \ + float32x2_t result; \ + __asm__ ("ins %0.s[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vset_lane_f64(a, b, c) \ + __extension__ \ + ({ \ + float64x1_t b_ = (b); \ + float64_t a_ = (a); \ + float64x1_t result; \ + __asm__ ("ins %0.d[%3], %x1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vset_lane_p8(a, b, c) \ + __extension__ \ + ({ \ + poly8x8_t b_ = (b); \ + poly8_t a_ = (a); \ + poly8x8_t result; \ + __asm__ ("ins %0.b[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vset_lane_p16(a, b, c) \ + __extension__ \ + ({ \ + poly16x4_t b_ = (b); \ + poly16_t a_ = (a); \ + poly16x4_t result; \ + __asm__ ("ins %0.h[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vset_lane_s8(a, b, c) \ + __extension__ \ + ({ \ + int8x8_t b_ = (b); \ + int8_t a_ = (a); \ + int8x8_t result; \ + __asm__ ("ins %0.b[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vset_lane_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x4_t b_ = (b); \ + int16_t a_ = (a); \ + int16x4_t result; \ + __asm__ ("ins %0.h[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vset_lane_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x2_t b_ = (b); \ + int32_t a_ = (a); \ + int32x2_t result; \ + __asm__ ("ins %0.s[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vset_lane_s64(a, b, c) \ + __extension__ \ + ({ \ + 
int64x1_t b_ = (b); \ + int64_t a_ = (a); \ + int64x1_t result; \ + __asm__ ("ins %0.d[%3], %x1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vset_lane_u8(a, b, c) \ + __extension__ \ + ({ \ + uint8x8_t b_ = (b); \ + uint8_t a_ = (a); \ + uint8x8_t result; \ + __asm__ ("ins %0.b[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vset_lane_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x4_t b_ = (b); \ + uint16_t a_ = (a); \ + uint16x4_t result; \ + __asm__ ("ins %0.h[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vset_lane_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x2_t b_ = (b); \ + uint32_t a_ = (a); \ + uint32x2_t result; \ + __asm__ ("ins %0.s[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vset_lane_u64(a, b, c) \ + __extension__ \ + ({ \ + uint64x1_t b_ = (b); \ + uint64_t a_ = (a); \ + uint64x1_t result; \ + __asm__ ("ins %0.d[%3], %x1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsetq_lane_f32(a, b, c) \ + __extension__ \ + ({ \ + float32x4_t b_ = (b); \ + float32_t a_ = (a); \ + float32x4_t result; \ + __asm__ ("ins %0.s[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsetq_lane_f64(a, b, c) \ + __extension__ \ + ({ \ + float64x2_t b_ = (b); \ + float64_t a_ = (a); \ + float64x2_t result; \ + __asm__ ("ins %0.d[%3], %x1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsetq_lane_p8(a, b, c) \ + __extension__ \ + ({ \ + poly8x16_t b_ = (b); \ + poly8_t a_ = (a); \ + poly8x16_t result; \ + __asm__ ("ins %0.b[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsetq_lane_p16(a, b, c) \ + __extension__ \ + ({ \ + poly16x8_t b_ = (b); \ + poly16_t a_ = (a); \ + poly16x8_t result; \ + __asm__ ("ins %0.h[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsetq_lane_s8(a, b, c) \ + __extension__ \ + ({ \ + int8x16_t b_ = (b); \ + int8_t a_ = (a); \ + int8x16_t result; \ + __asm__ ("ins %0.b[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsetq_lane_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x8_t b_ = (b); \ + int16_t a_ = (a); \ + int16x8_t result; \ + __asm__ ("ins %0.h[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsetq_lane_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x4_t b_ = (b); \ + int32_t a_ = (a); \ + int32x4_t result; \ + __asm__ ("ins %0.s[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsetq_lane_s64(a, b, c) \ + __extension__ \ + ({ \ + int64x2_t b_ = (b); \ + int64_t a_ = (a); \ + int64x2_t result; \ + __asm__ ("ins %0.d[%3], %x1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsetq_lane_u8(a, b, c) \ + __extension__ \ + ({ \ + uint8x16_t b_ = (b); \ + uint8_t a_ = (a); \ + uint8x16_t result; \ + __asm__ ("ins %0.b[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define 
vsetq_lane_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x8_t b_ = (b); \ + uint16_t a_ = (a); \ + uint16x8_t result; \ + __asm__ ("ins %0.h[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsetq_lane_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x4_t b_ = (b); \ + uint32_t a_ = (a); \ + uint32x4_t result; \ + __asm__ ("ins %0.s[%3], %w1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsetq_lane_u64(a, b, c) \ + __extension__ \ + ({ \ + uint64x2_t b_ = (b); \ + uint64_t a_ = (a); \ + uint64x2_t result; \ + __asm__ ("ins %0.d[%3], %x1" \ + : "=w"(result) \ + : "r"(a_), "0"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vshrn_high_n_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x8_t b_ = (b); \ + int8x8_t a_ = (a); \ + int8x16_t result = vcombine_s8 \ + (a_, vcreate_s8 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("shrn2 %0.16b,%1.8h,#%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vshrn_high_n_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x4_t b_ = (b); \ + int16x4_t a_ = (a); \ + int16x8_t result = vcombine_s16 \ + (a_, vcreate_s16 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("shrn2 %0.8h,%1.4s,#%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vshrn_high_n_s64(a, b, c) \ + __extension__ \ + ({ \ + int64x2_t b_ = (b); \ + int32x2_t a_ = (a); \ + int32x4_t result = vcombine_s32 \ + (a_, vcreate_s32 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("shrn2 %0.4s,%1.2d,#%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vshrn_high_n_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x8_t b_ = (b); \ + uint8x8_t a_ = (a); \ + uint8x16_t result = vcombine_u8 \ + (a_, vcreate_u8 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("shrn2 %0.16b,%1.8h,#%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vshrn_high_n_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x4_t b_ = (b); \ + uint16x4_t a_ = (a); \ + uint16x8_t result = vcombine_u16 \ + (a_, vcreate_u16 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("shrn2 %0.8h,%1.4s,#%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vshrn_high_n_u64(a, b, c) \ + __extension__ \ + ({ \ + uint64x2_t b_ = (b); \ + uint32x2_t a_ = (a); \ + uint32x4_t result = vcombine_u32 \ + (a_, vcreate_u32 \ + (__AARCH64_UINT64_C (0x0))); \ + __asm__ ("shrn2 %0.4s,%1.2d,#%2" \ + : "+w"(result) \ + : "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vshrn_n_s16(a, b) \ + __extension__ \ + ({ \ + int16x8_t a_ = (a); \ + int8x8_t result; \ + __asm__ ("shrn %0.8b,%1.8h,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vshrn_n_s32(a, b) \ + __extension__ \ + ({ \ + int32x4_t a_ = (a); \ + int16x4_t result; \ + __asm__ ("shrn %0.4h,%1.4s,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vshrn_n_s64(a, b) \ + __extension__ \ + ({ \ + int64x2_t a_ = (a); \ + int32x2_t result; \ + __asm__ ("shrn %0.2s,%1.2d,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vshrn_n_u16(a, b) \ + __extension__ \ + ({ \ + uint16x8_t a_ = (a); \ + uint8x8_t result; \ + __asm__ ("shrn %0.8b,%1.8h,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : 
/* No clobbers */); \ + result; \ + }) + +#define vshrn_n_u32(a, b) \ + __extension__ \ + ({ \ + uint32x4_t a_ = (a); \ + uint16x4_t result; \ + __asm__ ("shrn %0.4h,%1.4s,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vshrn_n_u64(a, b) \ + __extension__ \ + ({ \ + uint64x2_t a_ = (a); \ + uint32x2_t result; \ + __asm__ ("shrn %0.2s,%1.2d,%2" \ + : "=w"(result) \ + : "w"(a_), "i"(b) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsli_n_p8(a, b, c) \ + __extension__ \ + ({ \ + poly8x8_t b_ = (b); \ + poly8x8_t a_ = (a); \ + poly8x8_t result; \ + __asm__ ("sli %0.8b,%2.8b,%3" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsli_n_p16(a, b, c) \ + __extension__ \ + ({ \ + poly16x4_t b_ = (b); \ + poly16x4_t a_ = (a); \ + poly16x4_t result; \ + __asm__ ("sli %0.4h,%2.4h,%3" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsliq_n_p8(a, b, c) \ + __extension__ \ + ({ \ + poly8x16_t b_ = (b); \ + poly8x16_t a_ = (a); \ + poly8x16_t result; \ + __asm__ ("sli %0.16b,%2.16b,%3" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsliq_n_p16(a, b, c) \ + __extension__ \ + ({ \ + poly16x8_t b_ = (b); \ + poly16x8_t a_ = (a); \ + poly16x8_t result; \ + __asm__ ("sli %0.8h,%2.8h,%3" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsri_n_p8(a, b, c) \ + __extension__ \ + ({ \ + poly8x8_t b_ = (b); \ + poly8x8_t a_ = (a); \ + poly8x8_t result; \ + __asm__ ("sri %0.8b,%2.8b,%3" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsri_n_p16(a, b, c) \ + __extension__ \ + ({ \ + poly16x4_t b_ = (b); \ + poly16x4_t a_ = (a); \ + poly16x4_t result; \ + __asm__ ("sri %0.4h,%2.4h,%3" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsriq_n_p8(a, b, c) \ + __extension__ \ + ({ \ + poly8x16_t b_ = (b); \ + poly8x16_t a_ = (a); \ + poly8x16_t result; \ + __asm__ ("sri %0.16b,%2.16b,%3" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vsriq_n_p16(a, b, c) \ + __extension__ \ + ({ \ + poly16x8_t b_ = (b); \ + poly16x8_t a_ = (a); \ + poly16x8_t result; \ + __asm__ ("sri %0.8h,%2.8h,%3" \ + : "=w"(result) \ + : "0"(a_), "w"(b_), "i"(c) \ + : /* No clobbers */); \ + result; \ + }) + +#define vst1_lane_f32(a, b, c) \ + __extension__ \ + ({ \ + float32x2_t b_ = (b); \ + float32_t * a_ = (a); \ + __asm__ ("st1 {%1.s}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1_lane_f64(a, b, c) \ + __extension__ \ + ({ \ + float64x1_t b_ = (b); \ + float64_t * a_ = (a); \ + __asm__ ("st1 {%1.d}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1_lane_p8(a, b, c) \ + __extension__ \ + ({ \ + poly8x8_t b_ = (b); \ + poly8_t * a_ = (a); \ + __asm__ ("st1 {%1.b}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1_lane_p16(a, b, c) \ + __extension__ \ + ({ \ + poly16x4_t b_ = (b); \ + poly16_t * a_ = (a); \ + __asm__ ("st1 {%1.h}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1_lane_s8(a, b, c) \ + __extension__ \ + ({ \ + int8x8_t b_ = (b); \ + int8_t * a_ = (a); \ + __asm__ ("st1 {%1.b}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : 
"memory"); \ + }) + +#define vst1_lane_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x4_t b_ = (b); \ + int16_t * a_ = (a); \ + __asm__ ("st1 {%1.h}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1_lane_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x2_t b_ = (b); \ + int32_t * a_ = (a); \ + __asm__ ("st1 {%1.s}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1_lane_s64(a, b, c) \ + __extension__ \ + ({ \ + int64x1_t b_ = (b); \ + int64_t * a_ = (a); \ + __asm__ ("st1 {%1.d}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1_lane_u8(a, b, c) \ + __extension__ \ + ({ \ + uint8x8_t b_ = (b); \ + uint8_t * a_ = (a); \ + __asm__ ("st1 {%1.b}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1_lane_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x4_t b_ = (b); \ + uint16_t * a_ = (a); \ + __asm__ ("st1 {%1.h}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1_lane_u32(a, b, c) \ + __extension__ \ + ({ \ + uint32x2_t b_ = (b); \ + uint32_t * a_ = (a); \ + __asm__ ("st1 {%1.s}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1_lane_u64(a, b, c) \ + __extension__ \ + ({ \ + uint64x1_t b_ = (b); \ + uint64_t * a_ = (a); \ + __asm__ ("st1 {%1.d}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + + +#define vst1q_lane_f32(a, b, c) \ + __extension__ \ + ({ \ + float32x4_t b_ = (b); \ + float32_t * a_ = (a); \ + __asm__ ("st1 {%1.s}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1q_lane_f64(a, b, c) \ + __extension__ \ + ({ \ + float64x2_t b_ = (b); \ + float64_t * a_ = (a); \ + __asm__ ("st1 {%1.d}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1q_lane_p8(a, b, c) \ + __extension__ \ + ({ \ + poly8x16_t b_ = (b); \ + poly8_t * a_ = (a); \ + __asm__ ("st1 {%1.b}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1q_lane_p16(a, b, c) \ + __extension__ \ + ({ \ + poly16x8_t b_ = (b); \ + poly16_t * a_ = (a); \ + __asm__ ("st1 {%1.h}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1q_lane_s8(a, b, c) \ + __extension__ \ + ({ \ + int8x16_t b_ = (b); \ + int8_t * a_ = (a); \ + __asm__ ("st1 {%1.b}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1q_lane_s16(a, b, c) \ + __extension__ \ + ({ \ + int16x8_t b_ = (b); \ + int16_t * a_ = (a); \ + __asm__ ("st1 {%1.h}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1q_lane_s32(a, b, c) \ + __extension__ \ + ({ \ + int32x4_t b_ = (b); \ + int32_t * a_ = (a); \ + __asm__ ("st1 {%1.s}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1q_lane_s64(a, b, c) \ + __extension__ \ + ({ \ + int64x2_t b_ = (b); \ + int64_t * a_ = (a); \ + __asm__ ("st1 {%1.d}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1q_lane_u8(a, b, c) \ + __extension__ \ + ({ \ + uint8x16_t b_ = (b); \ + uint8_t * a_ = (a); \ + __asm__ ("st1 {%1.b}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1q_lane_u16(a, b, c) \ + __extension__ \ + ({ \ + uint16x8_t b_ = (b); \ + uint16_t * a_ = (a); \ + __asm__ ("st1 {%1.h}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1q_lane_u32(a, b, c) \ + __extension__ \ + 
({ \ + uint32x4_t b_ = (b); \ + uint32_t * a_ = (a); \ + __asm__ ("st1 {%1.s}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +#define vst1q_lane_u64(a, b, c) \ + __extension__ \ + ({ \ + uint64x2_t b_ = (b); \ + uint64_t * a_ = (a); \ + __asm__ ("st1 {%1.d}[%2],[%0]" \ + : \ + : "r"(a_), "w"(b_), "i"(c) \ + : "memory"); \ + }) + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vsubhn_high_s16 (int8x8_t a, int16x8_t b, int16x8_t c) +{ + int8x16_t result = vcombine_s8 (a, vcreate_s8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("subhn2 %0.16b, %1.8h, %2.8h" + : "+w"(result) + : "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vsubhn_high_s32 (int16x4_t a, int32x4_t b, int32x4_t c) +{ + int16x8_t result = vcombine_s16 (a, vcreate_s16 (__AARCH64_UINT64_C (0x0))); + __asm__ ("subhn2 %0.8h, %1.4s, %2.4s" + : "+w"(result) + : "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vsubhn_high_s64 (int32x2_t a, int64x2_t b, int64x2_t c) +{ + int32x4_t result = vcombine_s32 (a, vcreate_s32 (__AARCH64_UINT64_C (0x0))); + __asm__ ("subhn2 %0.4s, %1.2d, %2.2d" + : "+w"(result) + : "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vsubhn_high_u16 (uint8x8_t a, uint16x8_t b, uint16x8_t c) +{ + uint8x16_t result = vcombine_u8 (a, vcreate_u8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("subhn2 %0.16b, %1.8h, %2.8h" + : "+w"(result) + : "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vsubhn_high_u32 (uint16x4_t a, uint32x4_t b, uint32x4_t c) +{ + uint16x8_t result = vcombine_u16 (a, vcreate_u16 (__AARCH64_UINT64_C (0x0))); + __asm__ ("subhn2 %0.8h, %1.4s, %2.4s" + : "+w"(result) + : "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vsubhn_high_u64 (uint32x2_t a, uint64x2_t b, uint64x2_t c) +{ + uint32x4_t result = vcombine_u32 (a, vcreate_u32 (__AARCH64_UINT64_C (0x0))); + __asm__ ("subhn2 %0.4s, %1.2d, %2.2d" + : "+w"(result) + : "w"(b), "w"(c) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vsubhn_s16 (int16x8_t a, int16x8_t b) +{ + int8x8_t result; + __asm__ ("subhn %0.8b, %1.8h, %2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vsubhn_s32 (int32x4_t a, int32x4_t b) +{ + int16x4_t result; + __asm__ ("subhn %0.4h, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vsubhn_s64 (int64x2_t a, int64x2_t b) +{ + int32x2_t result; + __asm__ ("subhn %0.2s, %1.2d, %2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vsubhn_u16 (uint16x8_t a, uint16x8_t b) +{ + uint8x8_t result; + __asm__ ("subhn %0.8b, %1.8h, %2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vsubhn_u32 (uint32x4_t a, uint32x4_t b) 
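+/* The vset[q]_lane and vst1[q]_lane macros above insert a scalar into
+   one vector lane (INS from a general register) and store a single lane
+   directly to memory (ST1 {v}[i]); the "memory" clobber on the stores
+   is what tells GCC the asm writes through the pointer.  Sketch with a
+   hypothetical float32x4_t 'v':
+     float out[4];
+     float32x4_t v2 = vsetq_lane_f32 (3.0f, v, 1);
+     vst1q_lane_f32 (&out[2], v2, 2);  */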
+{ + uint16x4_t result; + __asm__ ("subhn %0.4h, %1.4s, %2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vsubhn_u64 (uint64x2_t a, uint64x2_t b) +{ + uint32x2_t result; + __asm__ ("subhn %0.2s, %1.2d, %2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vtrn1_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("trn1 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vtrn1_p8 (poly8x8_t a, poly8x8_t b) +{ + poly8x8_t result; + __asm__ ("trn1 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vtrn1_p16 (poly16x4_t a, poly16x4_t b) +{ + poly16x4_t result; + __asm__ ("trn1 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vtrn1_s8 (int8x8_t a, int8x8_t b) +{ + int8x8_t result; + __asm__ ("trn1 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vtrn1_s16 (int16x4_t a, int16x4_t b) +{ + int16x4_t result; + __asm__ ("trn1 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vtrn1_s32 (int32x2_t a, int32x2_t b) +{ + int32x2_t result; + __asm__ ("trn1 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vtrn1_u8 (uint8x8_t a, uint8x8_t b) +{ + uint8x8_t result; + __asm__ ("trn1 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vtrn1_u16 (uint16x4_t a, uint16x4_t b) +{ + uint16x4_t result; + __asm__ ("trn1 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vtrn1_u32 (uint32x2_t a, uint32x2_t b) +{ + uint32x2_t result; + __asm__ ("trn1 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vtrn1q_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("trn1 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vtrn1q_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("trn1 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vtrn1q_p8 (poly8x16_t a, poly8x16_t b) +{ + poly8x16_t result; + __asm__ ("trn1 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline 
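+/* TRN1 keeps the even-indexed elements of both inputs and TRN2 the
+   odd-indexed ones, interleaved, so the pair performs a 2x2 transpose
+   of adjacent lanes.  Sketch with hypothetical int16x4_t inputs a, b:
+     int16x4_t r1 = vtrn1_s16 (a, b);    a0 b0 a2 b2
+     int16x4_t r2 = vtrn2_s16 (a, b);    a1 b1 a3 b3  */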
poly16x8_t __attribute__ ((__always_inline__)) +vtrn1q_p16 (poly16x8_t a, poly16x8_t b) +{ + poly16x8_t result; + __asm__ ("trn1 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vtrn1q_s8 (int8x16_t a, int8x16_t b) +{ + int8x16_t result; + __asm__ ("trn1 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vtrn1q_s16 (int16x8_t a, int16x8_t b) +{ + int16x8_t result; + __asm__ ("trn1 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vtrn1q_s32 (int32x4_t a, int32x4_t b) +{ + int32x4_t result; + __asm__ ("trn1 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vtrn1q_s64 (int64x2_t a, int64x2_t b) +{ + int64x2_t result; + __asm__ ("trn1 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vtrn1q_u8 (uint8x16_t a, uint8x16_t b) +{ + uint8x16_t result; + __asm__ ("trn1 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vtrn1q_u16 (uint16x8_t a, uint16x8_t b) +{ + uint16x8_t result; + __asm__ ("trn1 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vtrn1q_u32 (uint32x4_t a, uint32x4_t b) +{ + uint32x4_t result; + __asm__ ("trn1 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vtrn1q_u64 (uint64x2_t a, uint64x2_t b) +{ + uint64x2_t result; + __asm__ ("trn1 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vtrn2_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("trn2 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vtrn2_p8 (poly8x8_t a, poly8x8_t b) +{ + poly8x8_t result; + __asm__ ("trn2 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vtrn2_p16 (poly16x4_t a, poly16x4_t b) +{ + poly16x4_t result; + __asm__ ("trn2 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vtrn2_s8 (int8x8_t a, int8x8_t b) +{ + int8x8_t result; + __asm__ ("trn2 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vtrn2_s16 (int16x4_t a, int16x4_t b) +{ + int16x4_t result; + __asm__ ("trn2 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* 
No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vtrn2_s32 (int32x2_t a, int32x2_t b) +{ + int32x2_t result; + __asm__ ("trn2 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vtrn2_u8 (uint8x8_t a, uint8x8_t b) +{ + uint8x8_t result; + __asm__ ("trn2 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vtrn2_u16 (uint16x4_t a, uint16x4_t b) +{ + uint16x4_t result; + __asm__ ("trn2 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vtrn2_u32 (uint32x2_t a, uint32x2_t b) +{ + uint32x2_t result; + __asm__ ("trn2 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vtrn2q_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("trn2 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vtrn2q_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("trn2 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vtrn2q_p8 (poly8x16_t a, poly8x16_t b) +{ + poly8x16_t result; + __asm__ ("trn2 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vtrn2q_p16 (poly16x8_t a, poly16x8_t b) +{ + poly16x8_t result; + __asm__ ("trn2 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vtrn2q_s8 (int8x16_t a, int8x16_t b) +{ + int8x16_t result; + __asm__ ("trn2 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vtrn2q_s16 (int16x8_t a, int16x8_t b) +{ + int16x8_t result; + __asm__ ("trn2 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vtrn2q_s32 (int32x4_t a, int32x4_t b) +{ + int32x4_t result; + __asm__ ("trn2 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vtrn2q_s64 (int64x2_t a, int64x2_t b) +{ + int64x2_t result; + __asm__ ("trn2 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vtrn2q_u8 (uint8x16_t a, uint8x16_t b) +{ + uint8x16_t result; + __asm__ ("trn2 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vtrn2q_u16 (uint16x8_t a, uint16x8_t b) +{ + uint16x8_t 
result; + __asm__ ("trn2 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vtrn2q_u32 (uint32x4_t a, uint32x4_t b) +{ + uint32x4_t result; + __asm__ ("trn2 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vtrn2q_u64 (uint64x2_t a, uint64x2_t b) +{ + uint64x2_t result; + __asm__ ("trn2 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vtst_p8 (poly8x8_t a, poly8x8_t b) +{ + uint8x8_t result; + __asm__ ("cmtst %0.8b, %1.8b, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vtst_p16 (poly16x4_t a, poly16x4_t b) +{ + uint16x4_t result; + __asm__ ("cmtst %0.4h, %1.4h, %2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vtstq_p8 (poly8x16_t a, poly8x16_t b) +{ + uint8x16_t result; + __asm__ ("cmtst %0.16b, %1.16b, %2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vtstq_p16 (poly16x8_t a, poly16x8_t b) +{ + uint16x8_t result; + __asm__ ("cmtst %0.8h, %1.8h, %2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vuzp1_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("uzp1 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vuzp1_p8 (poly8x8_t a, poly8x8_t b) +{ + poly8x8_t result; + __asm__ ("uzp1 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vuzp1_p16 (poly16x4_t a, poly16x4_t b) +{ + poly16x4_t result; + __asm__ ("uzp1 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vuzp1_s8 (int8x8_t a, int8x8_t b) +{ + int8x8_t result; + __asm__ ("uzp1 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vuzp1_s16 (int16x4_t a, int16x4_t b) +{ + int16x4_t result; + __asm__ ("uzp1 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vuzp1_s32 (int32x2_t a, int32x2_t b) +{ + int32x2_t result; + __asm__ ("uzp1 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vuzp1_u8 (uint8x8_t a, uint8x8_t b) +{ + uint8x8_t result; + __asm__ ("uzp1 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ 
((__always_inline__)) +vuzp1_u16 (uint16x4_t a, uint16x4_t b) +{ + uint16x4_t result; + __asm__ ("uzp1 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vuzp1_u32 (uint32x2_t a, uint32x2_t b) +{ + uint32x2_t result; + __asm__ ("uzp1 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vuzp1q_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("uzp1 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vuzp1q_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("uzp1 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vuzp1q_p8 (poly8x16_t a, poly8x16_t b) +{ + poly8x16_t result; + __asm__ ("uzp1 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vuzp1q_p16 (poly16x8_t a, poly16x8_t b) +{ + poly16x8_t result; + __asm__ ("uzp1 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vuzp1q_s8 (int8x16_t a, int8x16_t b) +{ + int8x16_t result; + __asm__ ("uzp1 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vuzp1q_s16 (int16x8_t a, int16x8_t b) +{ + int16x8_t result; + __asm__ ("uzp1 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vuzp1q_s32 (int32x4_t a, int32x4_t b) +{ + int32x4_t result; + __asm__ ("uzp1 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vuzp1q_s64 (int64x2_t a, int64x2_t b) +{ + int64x2_t result; + __asm__ ("uzp1 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vuzp1q_u8 (uint8x16_t a, uint8x16_t b) +{ + uint8x16_t result; + __asm__ ("uzp1 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vuzp1q_u16 (uint16x8_t a, uint16x8_t b) +{ + uint16x8_t result; + __asm__ ("uzp1 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vuzp1q_u32 (uint32x4_t a, uint32x4_t b) +{ + uint32x4_t result; + __asm__ ("uzp1 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vuzp1q_u64 (uint64x2_t a, uint64x2_t b) +{ + uint64x2_t result; + __asm__ ("uzp1 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* 
No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vuzp2_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("uzp2 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vuzp2_p8 (poly8x8_t a, poly8x8_t b) +{ + poly8x8_t result; + __asm__ ("uzp2 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vuzp2_p16 (poly16x4_t a, poly16x4_t b) +{ + poly16x4_t result; + __asm__ ("uzp2 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vuzp2_s8 (int8x8_t a, int8x8_t b) +{ + int8x8_t result; + __asm__ ("uzp2 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vuzp2_s16 (int16x4_t a, int16x4_t b) +{ + int16x4_t result; + __asm__ ("uzp2 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vuzp2_s32 (int32x2_t a, int32x2_t b) +{ + int32x2_t result; + __asm__ ("uzp2 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vuzp2_u8 (uint8x8_t a, uint8x8_t b) +{ + uint8x8_t result; + __asm__ ("uzp2 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vuzp2_u16 (uint16x4_t a, uint16x4_t b) +{ + uint16x4_t result; + __asm__ ("uzp2 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vuzp2_u32 (uint32x2_t a, uint32x2_t b) +{ + uint32x2_t result; + __asm__ ("uzp2 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vuzp2q_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("uzp2 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vuzp2q_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("uzp2 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vuzp2q_p8 (poly8x16_t a, poly8x16_t b) +{ + poly8x16_t result; + __asm__ ("uzp2 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vuzp2q_p16 (poly16x8_t a, poly16x8_t b) +{ + poly16x8_t result; + __asm__ ("uzp2 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vuzp2q_s8 (int8x16_t a, int8x16_t b) +{ + int8x16_t result; + 
__asm__ ("uzp2 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vuzp2q_s16 (int16x8_t a, int16x8_t b) +{ + int16x8_t result; + __asm__ ("uzp2 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vuzp2q_s32 (int32x4_t a, int32x4_t b) +{ + int32x4_t result; + __asm__ ("uzp2 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vuzp2q_s64 (int64x2_t a, int64x2_t b) +{ + int64x2_t result; + __asm__ ("uzp2 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vuzp2q_u8 (uint8x16_t a, uint8x16_t b) +{ + uint8x16_t result; + __asm__ ("uzp2 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vuzp2q_u16 (uint16x8_t a, uint16x8_t b) +{ + uint16x8_t result; + __asm__ ("uzp2 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vuzp2q_u32 (uint32x4_t a, uint32x4_t b) +{ + uint32x4_t result; + __asm__ ("uzp2 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vuzp2q_u64 (uint64x2_t a, uint64x2_t b) +{ + uint64x2_t result; + __asm__ ("uzp2 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vzip1_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("zip1 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vzip1_p8 (poly8x8_t a, poly8x8_t b) +{ + poly8x8_t result; + __asm__ ("zip1 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vzip1_p16 (poly16x4_t a, poly16x4_t b) +{ + poly16x4_t result; + __asm__ ("zip1 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vzip1_s8 (int8x8_t a, int8x8_t b) +{ + int8x8_t result; + __asm__ ("zip1 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vzip1_s16 (int16x4_t a, int16x4_t b) +{ + int16x4_t result; + __asm__ ("zip1 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vzip1_s32 (int32x2_t a, int32x2_t b) +{ + int32x2_t result; + __asm__ ("zip1 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) 
+vzip1_u8 (uint8x8_t a, uint8x8_t b) +{ + uint8x8_t result; + __asm__ ("zip1 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vzip1_u16 (uint16x4_t a, uint16x4_t b) +{ + uint16x4_t result; + __asm__ ("zip1 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vzip1_u32 (uint32x2_t a, uint32x2_t b) +{ + uint32x2_t result; + __asm__ ("zip1 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vzip1q_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("zip1 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vzip1q_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("zip1 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vzip1q_p8 (poly8x16_t a, poly8x16_t b) +{ + poly8x16_t result; + __asm__ ("zip1 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vzip1q_p16 (poly16x8_t a, poly16x8_t b) +{ + poly16x8_t result; + __asm__ ("zip1 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vzip1q_s8 (int8x16_t a, int8x16_t b) +{ + int8x16_t result; + __asm__ ("zip1 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vzip1q_s16 (int16x8_t a, int16x8_t b) +{ + int16x8_t result; + __asm__ ("zip1 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vzip1q_s32 (int32x4_t a, int32x4_t b) +{ + int32x4_t result; + __asm__ ("zip1 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vzip1q_s64 (int64x2_t a, int64x2_t b) +{ + int64x2_t result; + __asm__ ("zip1 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vzip1q_u8 (uint8x16_t a, uint8x16_t b) +{ + uint8x16_t result; + __asm__ ("zip1 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vzip1q_u16 (uint16x8_t a, uint16x8_t b) +{ + uint16x8_t result; + __asm__ ("zip1 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vzip1q_u32 (uint32x4_t a, uint32x4_t b) +{ + uint32x4_t result; + __asm__ ("zip1 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return 
result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vzip1q_u64 (uint64x2_t a, uint64x2_t b) +{ + uint64x2_t result; + __asm__ ("zip1 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vzip2_f32 (float32x2_t a, float32x2_t b) +{ + float32x2_t result; + __asm__ ("zip2 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vzip2_p8 (poly8x8_t a, poly8x8_t b) +{ + poly8x8_t result; + __asm__ ("zip2 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vzip2_p16 (poly16x4_t a, poly16x4_t b) +{ + poly16x4_t result; + __asm__ ("zip2 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vzip2_s8 (int8x8_t a, int8x8_t b) +{ + int8x8_t result; + __asm__ ("zip2 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vzip2_s16 (int16x4_t a, int16x4_t b) +{ + int16x4_t result; + __asm__ ("zip2 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vzip2_s32 (int32x2_t a, int32x2_t b) +{ + int32x2_t result; + __asm__ ("zip2 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vzip2_u8 (uint8x8_t a, uint8x8_t b) +{ + uint8x8_t result; + __asm__ ("zip2 %0.8b,%1.8b,%2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vzip2_u16 (uint16x4_t a, uint16x4_t b) +{ + uint16x4_t result; + __asm__ ("zip2 %0.4h,%1.4h,%2.4h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vzip2_u32 (uint32x2_t a, uint32x2_t b) +{ + uint32x2_t result; + __asm__ ("zip2 %0.2s,%1.2s,%2.2s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vzip2q_f32 (float32x4_t a, float32x4_t b) +{ + float32x4_t result; + __asm__ ("zip2 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vzip2q_f64 (float64x2_t a, float64x2_t b) +{ + float64x2_t result; + __asm__ ("zip2 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vzip2q_p8 (poly8x16_t a, poly8x16_t b) +{ + poly8x16_t result; + __asm__ ("zip2 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vzip2q_p16 (poly16x8_t a, poly16x8_t b) +{ + poly16x8_t result; + __asm__ ("zip2 
%0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vzip2q_s8 (int8x16_t a, int8x16_t b) +{ + int8x16_t result; + __asm__ ("zip2 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vzip2q_s16 (int16x8_t a, int16x8_t b) +{ + int16x8_t result; + __asm__ ("zip2 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vzip2q_s32 (int32x4_t a, int32x4_t b) +{ + int32x4_t result; + __asm__ ("zip2 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vzip2q_s64 (int64x2_t a, int64x2_t b) +{ + int64x2_t result; + __asm__ ("zip2 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vzip2q_u8 (uint8x16_t a, uint8x16_t b) +{ + uint8x16_t result; + __asm__ ("zip2 %0.16b,%1.16b,%2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vzip2q_u16 (uint16x8_t a, uint16x8_t b) +{ + uint16x8_t result; + __asm__ ("zip2 %0.8h,%1.8h,%2.8h" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vzip2q_u32 (uint32x4_t a, uint32x4_t b) +{ + uint32x4_t result; + __asm__ ("zip2 %0.4s,%1.4s,%2.4s" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vzip2q_u64 (uint64x2_t a, uint64x2_t b) +{ + uint64x2_t result; + __asm__ ("zip2 %0.2d,%1.2d,%2.2d" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +/* End of temporary inline asm implementations. */ + +/* Start of temporary inline asm for vldn, vstn and friends. */ + +/* Create struct element types for duplicating loads. + + Create 2 element structures of: + + +------+----+----+----+----+ + | | 8 | 16 | 32 | 64 | + +------+----+----+----+----+ + |int | Y | Y | N | N | + +------+----+----+----+----+ + |uint | Y | Y | N | N | + +------+----+----+----+----+ + |float | - | - | N | N | + +------+----+----+----+----+ + |poly | Y | Y | - | - | + +------+----+----+----+----+ + + Create 3 element structures of: + + +------+----+----+----+----+ + | | 8 | 16 | 32 | 64 | + +------+----+----+----+----+ + |int | Y | Y | Y | Y | + +------+----+----+----+----+ + |uint | Y | Y | Y | Y | + +------+----+----+----+----+ + |float | - | - | Y | Y | + +------+----+----+----+----+ + |poly | Y | Y | - | - | + +------+----+----+----+----+ + + Create 4 element structures of: + + +------+----+----+----+----+ + | | 8 | 16 | 32 | 64 | + +------+----+----+----+----+ + |int | Y | N | N | Y | + +------+----+----+----+----+ + |uint | Y | N | N | Y | + +------+----+----+----+----+ + |float | - | - | N | Y | + +------+----+----+----+----+ + |poly | Y | N | - | - | + +------+----+----+----+----+ + + This is required for casting memory reference. 
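+ + For example, the __STRUCTN macro defined below expands + __STRUCTN (int, 8, 2) into + + typedef struct int8x2_t + { + int8_t val[2]; + } int8x2_t; + + so that *(const int8x2_t *)ptr names exactly the bytes an ld2r of + two s8 lanes reads, giving the asm a correctly sized memory operand.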
*/ +#define __STRUCTN(t, sz, nelem) \ + typedef struct t ## sz ## x ## nelem ## _t { \ + t ## sz ## _t val[nelem]; \ + } t ## sz ## x ## nelem ## _t; + +/* 2-element structs. */ +__STRUCTN (int, 8, 2) +__STRUCTN (int, 16, 2) +__STRUCTN (uint, 8, 2) +__STRUCTN (uint, 16, 2) +__STRUCTN (poly, 8, 2) +__STRUCTN (poly, 16, 2) +/* 3-element structs. */ +__STRUCTN (int, 8, 3) +__STRUCTN (int, 16, 3) +__STRUCTN (int, 32, 3) +__STRUCTN (int, 64, 3) +__STRUCTN (uint, 8, 3) +__STRUCTN (uint, 16, 3) +__STRUCTN (uint, 32, 3) +__STRUCTN (uint, 64, 3) +__STRUCTN (float, 32, 3) +__STRUCTN (float, 64, 3) +__STRUCTN (poly, 8, 3) +__STRUCTN (poly, 16, 3) +/* 4-element structs. */ +__STRUCTN (int, 8, 4) +__STRUCTN (int, 64, 4) +__STRUCTN (uint, 8, 4) +__STRUCTN (uint, 64, 4) +__STRUCTN (poly, 8, 4) +__STRUCTN (float, 64, 4) +#undef __STRUCTN + +#define __LD2R_FUNC(rettype, structtype, ptrtype, \ + regsuffix, funcsuffix, Q) \ + __extension__ static __inline rettype \ + __attribute__ ((__always_inline__)) \ + vld2 ## Q ## _dup_ ## funcsuffix (const ptrtype *ptr) \ + { \ + rettype result; \ + __asm__ ("ld2r {v16." #regsuffix ", v17." #regsuffix "}, %1\n\t" \ + "st1 {v16." #regsuffix ", v17." #regsuffix "}, %0\n\t" \ + : "=Q"(result) \ + : "Q"(*(const structtype *)ptr) \ + : "memory", "v16", "v17"); \ + return result; \ + } + +__LD2R_FUNC (float32x2x2_t, float32x2_t, float32_t, 2s, f32,) +__LD2R_FUNC (float64x1x2_t, float64x2_t, float64_t, 1d, f64,) +__LD2R_FUNC (poly8x8x2_t, poly8x2_t, poly8_t, 8b, p8,) +__LD2R_FUNC (poly16x4x2_t, poly16x2_t, poly16_t, 4h, p16,) +__LD2R_FUNC (int8x8x2_t, int8x2_t, int8_t, 8b, s8,) +__LD2R_FUNC (int16x4x2_t, int16x2_t, int16_t, 4h, s16,) +__LD2R_FUNC (int32x2x2_t, int32x2_t, int32_t, 2s, s32,) +__LD2R_FUNC (int64x1x2_t, int64x2_t, int64_t, 1d, s64,) +__LD2R_FUNC (uint8x8x2_t, uint8x2_t, uint8_t, 8b, u8,) +__LD2R_FUNC (uint16x4x2_t, uint16x2_t, uint16_t, 4h, u16,) +__LD2R_FUNC (uint32x2x2_t, uint32x2_t, uint32_t, 2s, u32,) +__LD2R_FUNC (uint64x1x2_t, uint64x2_t, uint64_t, 1d, u64,) +__LD2R_FUNC (float32x4x2_t, float32x2_t, float32_t, 4s, f32, q) +__LD2R_FUNC (float64x2x2_t, float64x2_t, float64_t, 2d, f64, q) +__LD2R_FUNC (poly8x16x2_t, poly8x2_t, poly8_t, 16b, p8, q) +__LD2R_FUNC (poly16x8x2_t, poly16x2_t, poly16_t, 8h, p16, q) +__LD2R_FUNC (int8x16x2_t, int8x2_t, int8_t, 16b, s8, q) +__LD2R_FUNC (int16x8x2_t, int16x2_t, int16_t, 8h, s16, q) +__LD2R_FUNC (int32x4x2_t, int32x2_t, int32_t, 4s, s32, q) +__LD2R_FUNC (int64x2x2_t, int64x2_t, int64_t, 2d, s64, q) +__LD2R_FUNC (uint8x16x2_t, uint8x2_t, uint8_t, 16b, u8, q) +__LD2R_FUNC (uint16x8x2_t, uint16x2_t, uint16_t, 8h, u16, q) +__LD2R_FUNC (uint32x4x2_t, uint32x2_t, uint32_t, 4s, u32, q) +__LD2R_FUNC (uint64x2x2_t, uint64x2_t, uint64_t, 2d, u64, q) + +#define __LD2_LANE_FUNC(rettype, ptrtype, regsuffix, \ + lnsuffix, funcsuffix, Q) \ + __extension__ static __inline rettype \ + __attribute__ ((__always_inline__)) \ + vld2 ## Q ## _lane_ ## funcsuffix (const ptrtype *ptr, \ + rettype b, const int c) \ + { \ + rettype result; \ + __asm__ ("ld1 {v16." #regsuffix ", v17." #regsuffix "}, %1\n\t" \ + "ld2 {v16." #lnsuffix ", v17." #lnsuffix "}[%3], %2\n\t" \ + "st1 {v16." #regsuffix ", v17." 
#regsuffix "}, %0\n\t" \ + : "=Q"(result) \ + : "Q"(b), "Q"(*(const rettype *)ptr), "i"(c) \ + : "memory", "v16", "v17"); \ + return result; \ + } + +__LD2_LANE_FUNC (int8x8x2_t, uint8_t, 8b, b, s8,) +__LD2_LANE_FUNC (float32x2x2_t, float32_t, 2s, s, f32,) +__LD2_LANE_FUNC (float64x1x2_t, float64_t, 1d, d, f64,) +__LD2_LANE_FUNC (poly8x8x2_t, poly8_t, 8b, b, p8,) +__LD2_LANE_FUNC (poly16x4x2_t, poly16_t, 4h, h, p16,) +__LD2_LANE_FUNC (int16x4x2_t, int16_t, 4h, h, s16,) +__LD2_LANE_FUNC (int32x2x2_t, int32_t, 2s, s, s32,) +__LD2_LANE_FUNC (int64x1x2_t, int64_t, 1d, d, s64,) +__LD2_LANE_FUNC (uint8x8x2_t, uint8_t, 8b, b, u8,) +__LD2_LANE_FUNC (uint16x4x2_t, uint16_t, 4h, h, u16,) +__LD2_LANE_FUNC (uint32x2x2_t, uint32_t, 2s, s, u32,) +__LD2_LANE_FUNC (uint64x1x2_t, uint64_t, 1d, d, u64,) +__LD2_LANE_FUNC (float32x4x2_t, float32_t, 4s, s, f32, q) +__LD2_LANE_FUNC (float64x2x2_t, float64_t, 2d, d, f64, q) +__LD2_LANE_FUNC (poly8x16x2_t, poly8_t, 16b, b, p8, q) +__LD2_LANE_FUNC (poly16x8x2_t, poly16_t, 8h, h, p16, q) +__LD2_LANE_FUNC (int8x16x2_t, int8_t, 16b, b, s8, q) +__LD2_LANE_FUNC (int16x8x2_t, int16_t, 8h, h, s16, q) +__LD2_LANE_FUNC (int32x4x2_t, int32_t, 4s, s, s32, q) +__LD2_LANE_FUNC (int64x2x2_t, int64_t, 2d, d, s64, q) +__LD2_LANE_FUNC (uint8x16x2_t, uint8_t, 16b, b, u8, q) +__LD2_LANE_FUNC (uint16x8x2_t, uint16_t, 8h, h, u16, q) +__LD2_LANE_FUNC (uint32x4x2_t, uint32_t, 4s, s, u32, q) +__LD2_LANE_FUNC (uint64x2x2_t, uint64_t, 2d, d, u64, q) + +#define __LD3R_FUNC(rettype, structtype, ptrtype, \ + regsuffix, funcsuffix, Q) \ + __extension__ static __inline rettype \ + __attribute__ ((__always_inline__)) \ + vld3 ## Q ## _dup_ ## funcsuffix (const ptrtype *ptr) \ + { \ + rettype result; \ + __asm__ ("ld3r {v16." #regsuffix " - v18." #regsuffix "}, %1\n\t" \ + "st1 {v16." #regsuffix " - v18." 
#regsuffix "}, %0\n\t" \ + : "=Q"(result) \ + : "Q"(*(const structtype *)ptr) \ + : "memory", "v16", "v17", "v18"); \ + return result; \ + } + +__LD3R_FUNC (float32x2x3_t, float32x3_t, float32_t, 2s, f32,) +__LD3R_FUNC (float64x1x3_t, float64x3_t, float64_t, 1d, f64,) +__LD3R_FUNC (poly8x8x3_t, poly8x3_t, poly8_t, 8b, p8,) +__LD3R_FUNC (poly16x4x3_t, poly16x3_t, poly16_t, 4h, p16,) +__LD3R_FUNC (int8x8x3_t, int8x3_t, int8_t, 8b, s8,) +__LD3R_FUNC (int16x4x3_t, int16x3_t, int16_t, 4h, s16,) +__LD3R_FUNC (int32x2x3_t, int32x3_t, int32_t, 2s, s32,) +__LD3R_FUNC (int64x1x3_t, int64x3_t, int64_t, 1d, s64,) +__LD3R_FUNC (uint8x8x3_t, uint8x3_t, uint8_t, 8b, u8,) +__LD3R_FUNC (uint16x4x3_t, uint16x3_t, uint16_t, 4h, u16,) +__LD3R_FUNC (uint32x2x3_t, uint32x3_t, uint32_t, 2s, u32,) +__LD3R_FUNC (uint64x1x3_t, uint64x3_t, uint64_t, 1d, u64,) +__LD3R_FUNC (float32x4x3_t, float32x3_t, float32_t, 4s, f32, q) +__LD3R_FUNC (float64x2x3_t, float64x3_t, float64_t, 2d, f64, q) +__LD3R_FUNC (poly8x16x3_t, poly8x3_t, poly8_t, 16b, p8, q) +__LD3R_FUNC (poly16x8x3_t, poly16x3_t, poly16_t, 8h, p16, q) +__LD3R_FUNC (int8x16x3_t, int8x3_t, int8_t, 16b, s8, q) +__LD3R_FUNC (int16x8x3_t, int16x3_t, int16_t, 8h, s16, q) +__LD3R_FUNC (int32x4x3_t, int32x3_t, int32_t, 4s, s32, q) +__LD3R_FUNC (int64x2x3_t, int64x3_t, int64_t, 2d, s64, q) +__LD3R_FUNC (uint8x16x3_t, uint8x3_t, uint8_t, 16b, u8, q) +__LD3R_FUNC (uint16x8x3_t, uint16x3_t, uint16_t, 8h, u16, q) +__LD3R_FUNC (uint32x4x3_t, uint32x3_t, uint32_t, 4s, u32, q) +__LD3R_FUNC (uint64x2x3_t, uint64x3_t, uint64_t, 2d, u64, q) + +#define __LD3_LANE_FUNC(rettype, ptrtype, regsuffix, \ + lnsuffix, funcsuffix, Q) \ + __extension__ static __inline rettype \ + __attribute__ ((__always_inline__)) \ + vld3 ## Q ## _lane_ ## funcsuffix (const ptrtype *ptr, \ + rettype b, const int c) \ + { \ + rettype result; \ + __asm__ ("ld1 {v16." #regsuffix " - v18." #regsuffix "}, %1\n\t" \ + "ld3 {v16." #lnsuffix " - v18." #lnsuffix "}[%3], %2\n\t" \ + "st1 {v16." #regsuffix " - v18." 
#regsuffix "}, %0\n\t" \ + : "=Q"(result) \ + : "Q"(b), "Q"(*(const rettype *)ptr), "i"(c) \ + : "memory", "v16", "v17", "v18"); \ + return result; \ + } + +__LD3_LANE_FUNC (int8x8x3_t, uint8_t, 8b, b, s8,) +__LD3_LANE_FUNC (float32x2x3_t, float32_t, 2s, s, f32,) +__LD3_LANE_FUNC (float64x1x3_t, float64_t, 1d, d, f64,) +__LD3_LANE_FUNC (poly8x8x3_t, poly8_t, 8b, b, p8,) +__LD3_LANE_FUNC (poly16x4x3_t, poly16_t, 4h, h, p16,) +__LD3_LANE_FUNC (int16x4x3_t, int16_t, 4h, h, s16,) +__LD3_LANE_FUNC (int32x2x3_t, int32_t, 2s, s, s32,) +__LD3_LANE_FUNC (int64x1x3_t, int64_t, 1d, d, s64,) +__LD3_LANE_FUNC (uint8x8x3_t, uint8_t, 8b, b, u8,) +__LD3_LANE_FUNC (uint16x4x3_t, uint16_t, 4h, h, u16,) +__LD3_LANE_FUNC (uint32x2x3_t, uint32_t, 2s, s, u32,) +__LD3_LANE_FUNC (uint64x1x3_t, uint64_t, 1d, d, u64,) +__LD3_LANE_FUNC (float32x4x3_t, float32_t, 4s, s, f32, q) +__LD3_LANE_FUNC (float64x2x3_t, float64_t, 2d, d, f64, q) +__LD3_LANE_FUNC (poly8x16x3_t, poly8_t, 16b, b, p8, q) +__LD3_LANE_FUNC (poly16x8x3_t, poly16_t, 8h, h, p16, q) +__LD3_LANE_FUNC (int8x16x3_t, int8_t, 16b, b, s8, q) +__LD3_LANE_FUNC (int16x8x3_t, int16_t, 8h, h, s16, q) +__LD3_LANE_FUNC (int32x4x3_t, int32_t, 4s, s, s32, q) +__LD3_LANE_FUNC (int64x2x3_t, int64_t, 2d, d, s64, q) +__LD3_LANE_FUNC (uint8x16x3_t, uint8_t, 16b, b, u8, q) +__LD3_LANE_FUNC (uint16x8x3_t, uint16_t, 8h, h, u16, q) +__LD3_LANE_FUNC (uint32x4x3_t, uint32_t, 4s, s, u32, q) +__LD3_LANE_FUNC (uint64x2x3_t, uint64_t, 2d, d, u64, q) + +#define __LD4R_FUNC(rettype, structtype, ptrtype, \ + regsuffix, funcsuffix, Q) \ + __extension__ static __inline rettype \ + __attribute__ ((__always_inline__)) \ + vld4 ## Q ## _dup_ ## funcsuffix (const ptrtype *ptr) \ + { \ + rettype result; \ + __asm__ ("ld4r {v16." #regsuffix " - v19." #regsuffix "}, %1\n\t" \ + "st1 {v16." #regsuffix " - v19." 
#regsuffix "}, %0\n\t" \ + : "=Q"(result) \ + : "Q"(*(const structtype *)ptr) \ + : "memory", "v16", "v17", "v18", "v19"); \ + return result; \ + } + +__LD4R_FUNC (float32x2x4_t, float32x4_t, float32_t, 2s, f32,) +__LD4R_FUNC (float64x1x4_t, float64x4_t, float64_t, 1d, f64,) +__LD4R_FUNC (poly8x8x4_t, poly8x4_t, poly8_t, 8b, p8,) +__LD4R_FUNC (poly16x4x4_t, poly16x4_t, poly16_t, 4h, p16,) +__LD4R_FUNC (int8x8x4_t, int8x4_t, int8_t, 8b, s8,) +__LD4R_FUNC (int16x4x4_t, int16x4_t, int16_t, 4h, s16,) +__LD4R_FUNC (int32x2x4_t, int32x4_t, int32_t, 2s, s32,) +__LD4R_FUNC (int64x1x4_t, int64x4_t, int64_t, 1d, s64,) +__LD4R_FUNC (uint8x8x4_t, uint8x4_t, uint8_t, 8b, u8,) +__LD4R_FUNC (uint16x4x4_t, uint16x4_t, uint16_t, 4h, u16,) +__LD4R_FUNC (uint32x2x4_t, uint32x4_t, uint32_t, 2s, u32,) +__LD4R_FUNC (uint64x1x4_t, uint64x4_t, uint64_t, 1d, u64,) +__LD4R_FUNC (float32x4x4_t, float32x4_t, float32_t, 4s, f32, q) +__LD4R_FUNC (float64x2x4_t, float64x4_t, float64_t, 2d, f64, q) +__LD4R_FUNC (poly8x16x4_t, poly8x4_t, poly8_t, 16b, p8, q) +__LD4R_FUNC (poly16x8x4_t, poly16x4_t, poly16_t, 8h, p16, q) +__LD4R_FUNC (int8x16x4_t, int8x4_t, int8_t, 16b, s8, q) +__LD4R_FUNC (int16x8x4_t, int16x4_t, int16_t, 8h, s16, q) +__LD4R_FUNC (int32x4x4_t, int32x4_t, int32_t, 4s, s32, q) +__LD4R_FUNC (int64x2x4_t, int64x4_t, int64_t, 2d, s64, q) +__LD4R_FUNC (uint8x16x4_t, uint8x4_t, uint8_t, 16b, u8, q) +__LD4R_FUNC (uint16x8x4_t, uint16x4_t, uint16_t, 8h, u16, q) +__LD4R_FUNC (uint32x4x4_t, uint32x4_t, uint32_t, 4s, u32, q) +__LD4R_FUNC (uint64x2x4_t, uint64x4_t, uint64_t, 2d, u64, q) + +#define __LD4_LANE_FUNC(rettype, ptrtype, regsuffix, \ + lnsuffix, funcsuffix, Q) \ + __extension__ static __inline rettype \ + __attribute__ ((__always_inline__)) \ + vld4 ## Q ## _lane_ ## funcsuffix (const ptrtype *ptr, \ + rettype b, const int c) \ + { \ + rettype result; \ + __asm__ ("ld1 {v16." #regsuffix " - v19." #regsuffix "}, %1\n\t" \ + "ld4 {v16." #lnsuffix " - v19." #lnsuffix "}[%3], %2\n\t" \ + "st1 {v16." #regsuffix " - v19." 
#regsuffix "}, %0\n\t" \ + : "=Q"(result) \ + : "Q"(b), "Q"(*(const rettype *)ptr), "i"(c) \ + : "memory", "v16", "v17", "v18", "v19"); \ + return result; \ + } + +__LD4_LANE_FUNC (int8x8x4_t, uint8_t, 8b, b, s8,) +__LD4_LANE_FUNC (float32x2x4_t, float32_t, 2s, s, f32,) +__LD4_LANE_FUNC (float64x1x4_t, float64_t, 1d, d, f64,) +__LD4_LANE_FUNC (poly8x8x4_t, poly8_t, 8b, b, p8,) +__LD4_LANE_FUNC (poly16x4x4_t, poly16_t, 4h, h, p16,) +__LD4_LANE_FUNC (int16x4x4_t, int16_t, 4h, h, s16,) +__LD4_LANE_FUNC (int32x2x4_t, int32_t, 2s, s, s32,) +__LD4_LANE_FUNC (int64x1x4_t, int64_t, 1d, d, s64,) +__LD4_LANE_FUNC (uint8x8x4_t, uint8_t, 8b, b, u8,) +__LD4_LANE_FUNC (uint16x4x4_t, uint16_t, 4h, h, u16,) +__LD4_LANE_FUNC (uint32x2x4_t, uint32_t, 2s, s, u32,) +__LD4_LANE_FUNC (uint64x1x4_t, uint64_t, 1d, d, u64,) +__LD4_LANE_FUNC (float32x4x4_t, float32_t, 4s, s, f32, q) +__LD4_LANE_FUNC (float64x2x4_t, float64_t, 2d, d, f64, q) +__LD4_LANE_FUNC (poly8x16x4_t, poly8_t, 16b, b, p8, q) +__LD4_LANE_FUNC (poly16x8x4_t, poly16_t, 8h, h, p16, q) +__LD4_LANE_FUNC (int8x16x4_t, int8_t, 16b, b, s8, q) +__LD4_LANE_FUNC (int16x8x4_t, int16_t, 8h, h, s16, q) +__LD4_LANE_FUNC (int32x4x4_t, int32_t, 4s, s, s32, q) +__LD4_LANE_FUNC (int64x2x4_t, int64_t, 2d, d, s64, q) +__LD4_LANE_FUNC (uint8x16x4_t, uint8_t, 16b, b, u8, q) +__LD4_LANE_FUNC (uint16x8x4_t, uint16_t, 8h, h, u16, q) +__LD4_LANE_FUNC (uint32x4x4_t, uint32_t, 4s, s, u32, q) +__LD4_LANE_FUNC (uint64x2x4_t, uint64_t, 2d, d, u64, q) + +#define __ST2_LANE_FUNC(intype, ptrtype, regsuffix, \ + lnsuffix, funcsuffix, Q) \ + typedef struct { ptrtype __x[2]; } __ST2_LANE_STRUCTURE_##intype; \ + __extension__ static __inline void \ + __attribute__ ((__always_inline__)) \ + vst2 ## Q ## _lane_ ## funcsuffix (ptrtype *ptr, \ + intype b, const int c) \ + { \ + __ST2_LANE_STRUCTURE_##intype *__p = \ + (__ST2_LANE_STRUCTURE_##intype *)ptr; \ + __asm__ ("ld1 {v16." #regsuffix ", v17." #regsuffix "}, %1\n\t" \ + "st2 {v16." #lnsuffix ", v17." 
#lnsuffix "}[%2], %0\n\t" \ + : "=Q"(*__p) \ + : "Q"(b), "i"(c) \ + : "v16", "v17"); \ + } + +__ST2_LANE_FUNC (int8x8x2_t, int8_t, 8b, b, s8,) +__ST2_LANE_FUNC (float32x2x2_t, float32_t, 2s, s, f32,) +__ST2_LANE_FUNC (float64x1x2_t, float64_t, 1d, d, f64,) +__ST2_LANE_FUNC (poly8x8x2_t, poly8_t, 8b, b, p8,) +__ST2_LANE_FUNC (poly16x4x2_t, poly16_t, 4h, h, p16,) +__ST2_LANE_FUNC (int16x4x2_t, int16_t, 4h, h, s16,) +__ST2_LANE_FUNC (int32x2x2_t, int32_t, 2s, s, s32,) +__ST2_LANE_FUNC (int64x1x2_t, int64_t, 1d, d, s64,) +__ST2_LANE_FUNC (uint8x8x2_t, uint8_t, 8b, b, u8,) +__ST2_LANE_FUNC (uint16x4x2_t, uint16_t, 4h, h, u16,) +__ST2_LANE_FUNC (uint32x2x2_t, uint32_t, 2s, s, u32,) +__ST2_LANE_FUNC (uint64x1x2_t, uint64_t, 1d, d, u64,) +__ST2_LANE_FUNC (float32x4x2_t, float32_t, 4s, s, f32, q) +__ST2_LANE_FUNC (float64x2x2_t, float64_t, 2d, d, f64, q) +__ST2_LANE_FUNC (poly8x16x2_t, poly8_t, 16b, b, p8, q) +__ST2_LANE_FUNC (poly16x8x2_t, poly16_t, 8h, h, p16, q) +__ST2_LANE_FUNC (int8x16x2_t, int8_t, 16b, b, s8, q) +__ST2_LANE_FUNC (int16x8x2_t, int16_t, 8h, h, s16, q) +__ST2_LANE_FUNC (int32x4x2_t, int32_t, 4s, s, s32, q) +__ST2_LANE_FUNC (int64x2x2_t, int64_t, 2d, d, s64, q) +__ST2_LANE_FUNC (uint8x16x2_t, uint8_t, 16b, b, u8, q) +__ST2_LANE_FUNC (uint16x8x2_t, uint16_t, 8h, h, u16, q) +__ST2_LANE_FUNC (uint32x4x2_t, uint32_t, 4s, s, u32, q) +__ST2_LANE_FUNC (uint64x2x2_t, uint64_t, 2d, d, u64, q) + +#define __ST3_LANE_FUNC(intype, ptrtype, regsuffix, \ + lnsuffix, funcsuffix, Q) \ + typedef struct { ptrtype __x[3]; } __ST3_LANE_STRUCTURE_##intype; \ + __extension__ static __inline void \ + __attribute__ ((__always_inline__)) \ + vst3 ## Q ## _lane_ ## funcsuffix (ptrtype *ptr, \ + intype b, const int c) \ + { \ + __ST3_LANE_STRUCTURE_##intype *__p = \ + (__ST3_LANE_STRUCTURE_##intype *)ptr; \ + __asm__ ("ld1 {v16." #regsuffix " - v18." #regsuffix "}, %1\n\t" \ + "st3 {v16." #lnsuffix " - v18." 
#lnsuffix "}[%2], %0\n\t" \ + : "=Q"(*__p) \ + : "Q"(b), "i"(c) \ + : "v16", "v17", "v18"); \ + } + +__ST3_LANE_FUNC (int8x8x3_t, int8_t, 8b, b, s8,) +__ST3_LANE_FUNC (float32x2x3_t, float32_t, 2s, s, f32,) +__ST3_LANE_FUNC (float64x1x3_t, float64_t, 1d, d, f64,) +__ST3_LANE_FUNC (poly8x8x3_t, poly8_t, 8b, b, p8,) +__ST3_LANE_FUNC (poly16x4x3_t, poly16_t, 4h, h, p16,) +__ST3_LANE_FUNC (int16x4x3_t, int16_t, 4h, h, s16,) +__ST3_LANE_FUNC (int32x2x3_t, int32_t, 2s, s, s32,) +__ST3_LANE_FUNC (int64x1x3_t, int64_t, 1d, d, s64,) +__ST3_LANE_FUNC (uint8x8x3_t, uint8_t, 8b, b, u8,) +__ST3_LANE_FUNC (uint16x4x3_t, uint16_t, 4h, h, u16,) +__ST3_LANE_FUNC (uint32x2x3_t, uint32_t, 2s, s, u32,) +__ST3_LANE_FUNC (uint64x1x3_t, uint64_t, 1d, d, u64,) +__ST3_LANE_FUNC (float32x4x3_t, float32_t, 4s, s, f32, q) +__ST3_LANE_FUNC (float64x2x3_t, float64_t, 2d, d, f64, q) +__ST3_LANE_FUNC (poly8x16x3_t, poly8_t, 16b, b, p8, q) +__ST3_LANE_FUNC (poly16x8x3_t, poly16_t, 8h, h, p16, q) +__ST3_LANE_FUNC (int8x16x3_t, int8_t, 16b, b, s8, q) +__ST3_LANE_FUNC (int16x8x3_t, int16_t, 8h, h, s16, q) +__ST3_LANE_FUNC (int32x4x3_t, int32_t, 4s, s, s32, q) +__ST3_LANE_FUNC (int64x2x3_t, int64_t, 2d, d, s64, q) +__ST3_LANE_FUNC (uint8x16x3_t, uint8_t, 16b, b, u8, q) +__ST3_LANE_FUNC (uint16x8x3_t, uint16_t, 8h, h, u16, q) +__ST3_LANE_FUNC (uint32x4x3_t, uint32_t, 4s, s, u32, q) +__ST3_LANE_FUNC (uint64x2x3_t, uint64_t, 2d, d, u64, q) + +#define __ST4_LANE_FUNC(intype, ptrtype, regsuffix, \ + lnsuffix, funcsuffix, Q) \ + typedef struct { ptrtype __x[4]; } __ST4_LANE_STRUCTURE_##intype; \ + __extension__ static __inline void \ + __attribute__ ((__always_inline__)) \ + vst4 ## Q ## _lane_ ## funcsuffix (ptrtype *ptr, \ + intype b, const int c) \ + { \ + __ST4_LANE_STRUCTURE_##intype *__p = \ + (__ST4_LANE_STRUCTURE_##intype *)ptr; \ + __asm__ ("ld1 {v16." #regsuffix " - v19." #regsuffix "}, %1\n\t" \ + "st4 {v16." #lnsuffix " - v19." 
#lnsuffix "}[%2], %0\n\t" \ + : "=Q"(*__p) \ + : "Q"(b), "i"(c) \ + : "v16", "v17", "v18", "v19"); \ + } + +__ST4_LANE_FUNC (int8x8x4_t, int8_t, 8b, b, s8,) +__ST4_LANE_FUNC (float32x2x4_t, float32_t, 2s, s, f32,) +__ST4_LANE_FUNC (float64x1x4_t, float64_t, 1d, d, f64,) +__ST4_LANE_FUNC (poly8x8x4_t, poly8_t, 8b, b, p8,) +__ST4_LANE_FUNC (poly16x4x4_t, poly16_t, 4h, h, p16,) +__ST4_LANE_FUNC (int16x4x4_t, int16_t, 4h, h, s16,) +__ST4_LANE_FUNC (int32x2x4_t, int32_t, 2s, s, s32,) +__ST4_LANE_FUNC (int64x1x4_t, int64_t, 1d, d, s64,) +__ST4_LANE_FUNC (uint8x8x4_t, uint8_t, 8b, b, u8,) +__ST4_LANE_FUNC (uint16x4x4_t, uint16_t, 4h, h, u16,) +__ST4_LANE_FUNC (uint32x2x4_t, uint32_t, 2s, s, u32,) +__ST4_LANE_FUNC (uint64x1x4_t, uint64_t, 1d, d, u64,) +__ST4_LANE_FUNC (float32x4x4_t, float32_t, 4s, s, f32, q) +__ST4_LANE_FUNC (float64x2x4_t, float64_t, 2d, d, f64, q) +__ST4_LANE_FUNC (poly8x16x4_t, poly8_t, 16b, b, p8, q) +__ST4_LANE_FUNC (poly16x8x4_t, poly16_t, 8h, h, p16, q) +__ST4_LANE_FUNC (int8x16x4_t, int8_t, 16b, b, s8, q) +__ST4_LANE_FUNC (int16x8x4_t, int16_t, 8h, h, s16, q) +__ST4_LANE_FUNC (int32x4x4_t, int32_t, 4s, s, s32, q) +__ST4_LANE_FUNC (int64x2x4_t, int64_t, 2d, d, s64, q) +__ST4_LANE_FUNC (uint8x16x4_t, uint8_t, 16b, b, u8, q) +__ST4_LANE_FUNC (uint16x8x4_t, uint16_t, 8h, h, u16, q) +__ST4_LANE_FUNC (uint32x4x4_t, uint32_t, 4s, s, u32, q) +__ST4_LANE_FUNC (uint64x2x4_t, uint64_t, 2d, d, u64, q) + +__extension__ static __inline int64_t __attribute__ ((__always_inline__)) +vaddlv_s32 (int32x2_t a) +{ + int64_t result; + __asm__ ("saddlp %0.1d, %1.2s" : "=w"(result) : "w"(a) : ); + return result; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vaddlv_u32 (uint32x2_t a) +{ + uint64_t result; + __asm__ ("uaddlp %0.1d, %1.2s" : "=w"(result) : "w"(a) : ); + return result; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vpaddd_s64 (int64x2_t __a) +{ + return __builtin_aarch64_addpdi (__a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqdmulh_laneq_s16 (int16x4_t __a, int16x8_t __b, const int __c) +{ + return __builtin_aarch64_sqdmulh_laneqv4hi (__a, __b, __c); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqdmulh_laneq_s32 (int32x2_t __a, int32x4_t __b, const int __c) +{ + return __builtin_aarch64_sqdmulh_laneqv2si (__a, __b, __c); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqdmulhq_laneq_s16 (int16x8_t __a, int16x8_t __b, const int __c) +{ + return __builtin_aarch64_sqdmulh_laneqv8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmulhq_laneq_s32 (int32x4_t __a, int32x4_t __b, const int __c) +{ + return __builtin_aarch64_sqdmulh_laneqv4si (__a, __b, __c); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqrdmulh_laneq_s16 (int16x4_t __a, int16x8_t __b, const int __c) +{ + return __builtin_aarch64_sqrdmulh_laneqv4hi (__a, __b, __c); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqrdmulh_laneq_s32 (int32x2_t __a, int32x4_t __b, const int __c) +{ + return __builtin_aarch64_sqrdmulh_laneqv2si (__a, __b, __c); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqrdmulhq_laneq_s16 (int16x8_t __a, int16x8_t __b, const int __c) +{ + return __builtin_aarch64_sqrdmulh_laneqv8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ 
((__always_inline__)) +vqrdmulhq_laneq_s32 (int32x4_t __a, int32x4_t __b, const int __c) +{ + return __builtin_aarch64_sqrdmulh_laneqv4si (__a, __b, __c); +} + +/* Table intrinsics. */ + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vqtbl1_p8 (poly8x16_t a, uint8x8_t b) +{ + poly8x8_t result; + __asm__ ("tbl %0.8b, {%1.16b}, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqtbl1_s8 (int8x16_t a, uint8x8_t b) +{ + int8x8_t result; + __asm__ ("tbl %0.8b, {%1.16b}, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqtbl1_u8 (uint8x16_t a, uint8x8_t b) +{ + uint8x8_t result; + __asm__ ("tbl %0.8b, {%1.16b}, %2.8b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vqtbl1q_p8 (poly8x16_t a, uint8x16_t b) +{ + poly8x16_t result; + __asm__ ("tbl %0.16b, {%1.16b}, %2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqtbl1q_s8 (int8x16_t a, uint8x16_t b) +{ + int8x16_t result; + __asm__ ("tbl %0.16b, {%1.16b}, %2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqtbl1q_u8 (uint8x16_t a, uint8x16_t b) +{ + uint8x16_t result; + __asm__ ("tbl %0.16b, {%1.16b}, %2.16b" + : "=w"(result) + : "w"(a), "w"(b) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqtbl2_s8 (int8x16x2_t tab, uint8x8_t idx) +{ + int8x8_t result; + __asm__ ("ld1 {v16.16b, v17.16b}, %1\n\t" + "tbl %0.8b, {v16.16b, v17.16b}, %2.8b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17"); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqtbl2_u8 (uint8x16x2_t tab, uint8x8_t idx) +{ + uint8x8_t result; + __asm__ ("ld1 {v16.16b, v17.16b}, %1\n\t" + "tbl %0.8b, {v16.16b, v17.16b}, %2.8b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17"); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vqtbl2_p8 (poly8x16x2_t tab, uint8x8_t idx) +{ + poly8x8_t result; + __asm__ ("ld1 {v16.16b, v17.16b}, %1\n\t" + "tbl %0.8b, {v16.16b, v17.16b}, %2.8b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17"); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqtbl2q_s8 (int8x16x2_t tab, uint8x16_t idx) +{ + int8x16_t result; + __asm__ ("ld1 {v16.16b, v17.16b}, %1\n\t" + "tbl %0.16b, {v16.16b, v17.16b}, %2.16b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17"); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqtbl2q_u8 (uint8x16x2_t tab, uint8x16_t idx) +{ + uint8x16_t result; + __asm__ ("ld1 {v16.16b, v17.16b}, %1\n\t" + "tbl %0.16b, {v16.16b, v17.16b}, %2.16b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17"); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vqtbl2q_p8 (poly8x16x2_t tab, uint8x16_t idx) +{ + poly8x16_t result; + __asm__ ("ld1 {v16.16b, 
v17.16b}, %1\n\t" + "tbl %0.16b, {v16.16b, v17.16b}, %2.16b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17"); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqtbl3_s8 (int8x16x3_t tab, uint8x8_t idx) +{ + int8x8_t result; + __asm__ ("ld1 {v16.16b - v18.16b}, %1\n\t" + "tbl %0.8b, {v16.16b - v18.16b}, %2.8b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18"); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqtbl3_u8 (uint8x16x3_t tab, uint8x8_t idx) +{ + uint8x8_t result; + __asm__ ("ld1 {v16.16b - v18.16b}, %1\n\t" + "tbl %0.8b, {v16.16b - v18.16b}, %2.8b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18"); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vqtbl3_p8 (poly8x16x3_t tab, uint8x8_t idx) +{ + poly8x8_t result; + __asm__ ("ld1 {v16.16b - v18.16b}, %1\n\t" + "tbl %0.8b, {v16.16b - v18.16b}, %2.8b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18"); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqtbl3q_s8 (int8x16x3_t tab, uint8x16_t idx) +{ + int8x16_t result; + __asm__ ("ld1 {v16.16b - v18.16b}, %1\n\t" + "tbl %0.16b, {v16.16b - v18.16b}, %2.16b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18"); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqtbl3q_u8 (uint8x16x3_t tab, uint8x16_t idx) +{ + uint8x16_t result; + __asm__ ("ld1 {v16.16b - v18.16b}, %1\n\t" + "tbl %0.16b, {v16.16b - v18.16b}, %2.16b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18"); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vqtbl3q_p8 (poly8x16x3_t tab, uint8x16_t idx) +{ + poly8x16_t result; + __asm__ ("ld1 {v16.16b - v18.16b}, %1\n\t" + "tbl %0.16b, {v16.16b - v18.16b}, %2.16b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18"); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqtbl4_s8 (int8x16x4_t tab, uint8x8_t idx) +{ + int8x8_t result; + __asm__ ("ld1 {v16.16b - v19.16b}, %1\n\t" + "tbl %0.8b, {v16.16b - v19.16b}, %2.8b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18", "v19"); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqtbl4_u8 (uint8x16x4_t tab, uint8x8_t idx) +{ + uint8x8_t result; + __asm__ ("ld1 {v16.16b - v19.16b}, %1\n\t" + "tbl %0.8b, {v16.16b - v19.16b}, %2.8b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18", "v19"); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vqtbl4_p8 (poly8x16x4_t tab, uint8x8_t idx) +{ + poly8x8_t result; + __asm__ ("ld1 {v16.16b - v19.16b}, %1\n\t" + "tbl %0.8b, {v16.16b - v19.16b}, %2.8b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18", "v19"); + return result; +} + + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqtbl4q_s8 (int8x16x4_t tab, uint8x16_t idx) +{ + int8x16_t result; + __asm__ ("ld1 {v16.16b - v19.16b}, %1\n\t" + "tbl %0.16b, {v16.16b - v19.16b}, %2.16b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18", "v19"); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ 
((__always_inline__)) +vqtbl4q_u8 (uint8x16x4_t tab, uint8x16_t idx) +{ + uint8x16_t result; + __asm__ ("ld1 {v16.16b - v19.16b}, %1\n\t" + "tbl %0.16b, {v16.16b - v19.16b}, %2.16b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18", "v19"); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vqtbl4q_p8 (poly8x16x4_t tab, uint8x16_t idx) +{ + poly8x16_t result; + __asm__ ("ld1 {v16.16b - v19.16b}, %1\n\t" + "tbl %0.16b, {v16.16b - v19.16b}, %2.16b\n\t" + :"=w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18", "v19"); + return result; +} + + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqtbx1_s8 (int8x8_t r, int8x16_t tab, uint8x8_t idx) +{ + int8x8_t result = r; + __asm__ ("tbx %0.8b,{%1.16b},%2.8b" + : "+w"(result) + : "w"(tab), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqtbx1_u8 (uint8x8_t r, uint8x16_t tab, uint8x8_t idx) +{ + uint8x8_t result = r; + __asm__ ("tbx %0.8b,{%1.16b},%2.8b" + : "+w"(result) + : "w"(tab), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vqtbx1_p8 (poly8x8_t r, poly8x16_t tab, uint8x8_t idx) +{ + poly8x8_t result = r; + __asm__ ("tbx %0.8b,{%1.16b},%2.8b" + : "+w"(result) + : "w"(tab), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqtbx1q_s8 (int8x16_t r, int8x16_t tab, uint8x16_t idx) +{ + int8x16_t result = r; + __asm__ ("tbx %0.16b,{%1.16b},%2.16b" + : "+w"(result) + : "w"(tab), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqtbx1q_u8 (uint8x16_t r, uint8x16_t tab, uint8x16_t idx) +{ + uint8x16_t result = r; + __asm__ ("tbx %0.16b,{%1.16b},%2.16b" + : "+w"(result) + : "w"(tab), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vqtbx1q_p8 (poly8x16_t r, poly8x16_t tab, uint8x16_t idx) +{ + poly8x16_t result = r; + __asm__ ("tbx %0.16b,{%1.16b},%2.16b" + : "+w"(result) + : "w"(tab), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqtbx2_s8 (int8x8_t r, int8x16x2_t tab, uint8x8_t idx) +{ + int8x8_t result = r; + __asm__ ("ld1 {v16.16b, v17.16b}, %1\n\t" + "tbx %0.8b, {v16.16b, v17.16b}, %2.8b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17"); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqtbx2_u8 (uint8x8_t r, uint8x16x2_t tab, uint8x8_t idx) +{ + uint8x8_t result = r; + __asm__ ("ld1 {v16.16b, v17.16b}, %1\n\t" + "tbx %0.8b, {v16.16b, v17.16b}, %2.8b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17"); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vqtbx2_p8 (poly8x8_t r, poly8x16x2_t tab, uint8x8_t idx) +{ + poly8x8_t result = r; + __asm__ ("ld1 {v16.16b, v17.16b}, %1\n\t" + "tbx %0.8b, {v16.16b, v17.16b}, %2.8b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17"); + return result; +} + + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqtbx2q_s8 (int8x16_t r, int8x16x2_t tab, uint8x16_t idx) +{ + int8x16_t result = r; + __asm__ ("ld1 {v16.16b, 
v17.16b}, %1\n\t" + "tbx %0.16b, {v16.16b, v17.16b}, %2.16b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17"); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqtbx2q_u8 (uint8x16_t r, uint8x16x2_t tab, uint8x16_t idx) +{ + uint8x16_t result = r; + __asm__ ("ld1 {v16.16b, v17.16b}, %1\n\t" + "tbx %0.16b, {v16.16b, v17.16b}, %2.16b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17"); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vqtbx2q_p8 (poly8x16_t r, poly8x16x2_t tab, uint8x16_t idx) +{ + poly8x16_t result = r; + __asm__ ("ld1 {v16.16b, v17.16b}, %1\n\t" + "tbx %0.16b, {v16.16b, v17.16b}, %2.16b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17"); + return result; +} + + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqtbx3_s8 (int8x8_t r, int8x16x3_t tab, uint8x8_t idx) +{ + int8x8_t result = r; + __asm__ ("ld1 {v16.16b - v18.16b}, %1\n\t" + "tbx %0.8b, {v16.16b - v18.16b}, %2.8b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18"); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqtbx3_u8 (uint8x8_t r, uint8x16x3_t tab, uint8x8_t idx) +{ + uint8x8_t result = r; + __asm__ ("ld1 {v16.16b - v18.16b}, %1\n\t" + "tbx %0.8b, {v16.16b - v18.16b}, %2.8b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18"); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vqtbx3_p8 (poly8x8_t r, poly8x16x3_t tab, uint8x8_t idx) +{ + poly8x8_t result = r; + __asm__ ("ld1 {v16.16b - v18.16b}, %1\n\t" + "tbx %0.8b, {v16.16b - v18.16b}, %2.8b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18"); + return result; +} + + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqtbx3q_s8 (int8x16_t r, int8x16x3_t tab, uint8x16_t idx) +{ + int8x16_t result = r; + __asm__ ("ld1 {v16.16b - v18.16b}, %1\n\t" + "tbx %0.16b, {v16.16b - v18.16b}, %2.16b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18"); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqtbx3q_u8 (uint8x16_t r, uint8x16x3_t tab, uint8x16_t idx) +{ + uint8x16_t result = r; + __asm__ ("ld1 {v16.16b - v18.16b}, %1\n\t" + "tbx %0.16b, {v16.16b - v18.16b}, %2.16b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18"); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vqtbx3q_p8 (poly8x16_t r, poly8x16x3_t tab, uint8x16_t idx) +{ + poly8x16_t result = r; + __asm__ ("ld1 {v16.16b - v18.16b}, %1\n\t" + "tbx %0.16b, {v16.16b - v18.16b}, %2.16b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18"); + return result; +} + + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqtbx4_s8 (int8x8_t r, int8x16x4_t tab, uint8x8_t idx) +{ + int8x8_t result = r; + __asm__ ("ld1 {v16.16b - v19.16b}, %1\n\t" + "tbx %0.8b, {v16.16b - v19.16b}, %2.8b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18", "v19"); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqtbx4_u8 (uint8x8_t r, uint8x16x4_t tab, uint8x8_t idx) +{ + uint8x8_t result = r; + __asm__ ("ld1 {v16.16b - v19.16b}, %1\n\t" + "tbx %0.8b, {v16.16b - v19.16b}, %2.8b\n\t" + 
:"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18", "v19"); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vqtbx4_p8 (poly8x8_t r, poly8x16x4_t tab, uint8x8_t idx) +{ + poly8x8_t result = r; + __asm__ ("ld1 {v16.16b - v19.16b}, %1\n\t" + "tbx %0.8b, {v16.16b - v19.16b}, %2.8b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18", "v19"); + return result; +} + + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqtbx4q_s8 (int8x16_t r, int8x16x4_t tab, uint8x16_t idx) +{ + int8x16_t result = r; + __asm__ ("ld1 {v16.16b - v19.16b}, %1\n\t" + "tbx %0.16b, {v16.16b - v19.16b}, %2.16b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18", "v19"); + return result; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqtbx4q_u8 (uint8x16_t r, uint8x16x4_t tab, uint8x16_t idx) +{ + uint8x16_t result = r; + __asm__ ("ld1 {v16.16b - v19.16b}, %1\n\t" + "tbx %0.16b, {v16.16b - v19.16b}, %2.16b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18", "v19"); + return result; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vqtbx4q_p8 (poly8x16_t r, poly8x16x4_t tab, uint8x16_t idx) +{ + poly8x16_t result = r; + __asm__ ("ld1 {v16.16b - v19.16b}, %1\n\t" + "tbx %0.16b, {v16.16b - v19.16b}, %2.16b\n\t" + :"+w"(result) + :"Q"(tab),"w"(idx) + :"memory", "v16", "v17", "v18", "v19"); + return result; +} + +/* V7 legacy table intrinsics. */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vtbl1_s8 (int8x8_t tab, int8x8_t idx) +{ + int8x8_t result; + int8x16_t temp = vcombine_s8 (tab, vcreate_s8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("tbl %0.8b, {%1.16b}, %2.8b" + : "=w"(result) + : "w"(temp), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vtbl1_u8 (uint8x8_t tab, uint8x8_t idx) +{ + uint8x8_t result; + uint8x16_t temp = vcombine_u8 (tab, vcreate_u8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("tbl %0.8b, {%1.16b}, %2.8b" + : "=w"(result) + : "w"(temp), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vtbl1_p8 (poly8x8_t tab, uint8x8_t idx) +{ + poly8x8_t result; + poly8x16_t temp = vcombine_p8 (tab, vcreate_p8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("tbl %0.8b, {%1.16b}, %2.8b" + : "=w"(result) + : "w"(temp), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vtbl2_s8 (int8x8x2_t tab, int8x8_t idx) +{ + int8x8_t result; + int8x16_t temp = vcombine_s8 (tab.val[0], tab.val[1]); + __asm__ ("tbl %0.8b, {%1.16b}, %2.8b" + : "=w"(result) + : "w"(temp), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vtbl2_u8 (uint8x8x2_t tab, uint8x8_t idx) +{ + uint8x8_t result; + uint8x16_t temp = vcombine_u8 (tab.val[0], tab.val[1]); + __asm__ ("tbl %0.8b, {%1.16b}, %2.8b" + : "=w"(result) + : "w"(temp), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vtbl2_p8 (poly8x8x2_t tab, uint8x8_t idx) +{ + poly8x8_t result; + poly8x16_t temp = vcombine_p8 (tab.val[0], tab.val[1]); + __asm__ ("tbl %0.8b, {%1.16b}, %2.8b" + : "=w"(result) + : "w"(temp), "w"(idx) + : /* No 
clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vtbl3_s8 (int8x8x3_t tab, int8x8_t idx) +{ + int8x8_t result; + int8x16x2_t temp; + temp.val[0] = vcombine_s8 (tab.val[0], tab.val[1]); + temp.val[1] = vcombine_s8 (tab.val[2], vcreate_s8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("ld1 {v16.16b - v17.16b }, %1\n\t" + "tbl %0.8b, {v16.16b - v17.16b}, %2.8b\n\t" + : "=w"(result) + : "Q"(temp), "w"(idx) + : "v16", "v17", "memory"); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vtbl3_u8 (uint8x8x3_t tab, uint8x8_t idx) +{ + uint8x8_t result; + uint8x16x2_t temp; + temp.val[0] = vcombine_u8 (tab.val[0], tab.val[1]); + temp.val[1] = vcombine_u8 (tab.val[2], vcreate_u8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("ld1 {v16.16b - v17.16b }, %1\n\t" + "tbl %0.8b, {v16.16b - v17.16b}, %2.8b\n\t" + : "=w"(result) + : "Q"(temp), "w"(idx) + : "v16", "v17", "memory"); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vtbl3_p8 (poly8x8x3_t tab, uint8x8_t idx) +{ + poly8x8_t result; + poly8x16x2_t temp; + temp.val[0] = vcombine_p8 (tab.val[0], tab.val[1]); + temp.val[1] = vcombine_p8 (tab.val[2], vcreate_p8 (__AARCH64_UINT64_C (0x0))); + __asm__ ("ld1 {v16.16b - v17.16b }, %1\n\t" + "tbl %0.8b, {v16.16b - v17.16b}, %2.8b\n\t" + : "=w"(result) + : "Q"(temp), "w"(idx) + : "v16", "v17", "memory"); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vtbl4_s8 (int8x8x4_t tab, int8x8_t idx) +{ + int8x8_t result; + int8x16x2_t temp; + temp.val[0] = vcombine_s8 (tab.val[0], tab.val[1]); + temp.val[1] = vcombine_s8 (tab.val[2], tab.val[3]); + __asm__ ("ld1 {v16.16b - v17.16b }, %1\n\t" + "tbl %0.8b, {v16.16b - v17.16b}, %2.8b\n\t" + : "=w"(result) + : "Q"(temp), "w"(idx) + : "v16", "v17", "memory"); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vtbl4_u8 (uint8x8x4_t tab, uint8x8_t idx) +{ + uint8x8_t result; + uint8x16x2_t temp; + temp.val[0] = vcombine_u8 (tab.val[0], tab.val[1]); + temp.val[1] = vcombine_u8 (tab.val[2], tab.val[3]); + __asm__ ("ld1 {v16.16b - v17.16b }, %1\n\t" + "tbl %0.8b, {v16.16b - v17.16b}, %2.8b\n\t" + : "=w"(result) + : "Q"(temp), "w"(idx) + : "v16", "v17", "memory"); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vtbl4_p8 (poly8x8x4_t tab, uint8x8_t idx) +{ + poly8x8_t result; + poly8x16x2_t temp; + temp.val[0] = vcombine_p8 (tab.val[0], tab.val[1]); + temp.val[1] = vcombine_p8 (tab.val[2], tab.val[3]); + __asm__ ("ld1 {v16.16b - v17.16b }, %1\n\t" + "tbl %0.8b, {v16.16b - v17.16b}, %2.8b\n\t" + : "=w"(result) + : "Q"(temp), "w"(idx) + : "v16", "v17", "memory"); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vtbx2_s8 (int8x8_t r, int8x8x2_t tab, int8x8_t idx) +{ + int8x8_t result = r; + int8x16_t temp = vcombine_s8 (tab.val[0], tab.val[1]); + __asm__ ("tbx %0.8b, {%1.16b}, %2.8b" + : "+w"(result) + : "w"(temp), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vtbx2_u8 (uint8x8_t r, uint8x8x2_t tab, uint8x8_t idx) +{ + uint8x8_t result = r; + uint8x16_t temp = vcombine_u8 (tab.val[0], tab.val[1]); + __asm__ ("tbx %0.8b, {%1.16b}, %2.8b" + : "+w"(result) + : "w"(temp), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static 
__inline poly8x8_t __attribute__ ((__always_inline__)) +vtbx2_p8 (poly8x8_t r, poly8x8x2_t tab, uint8x8_t idx) +{ + poly8x8_t result = r; + poly8x16_t temp = vcombine_p8 (tab.val[0], tab.val[1]); + __asm__ ("tbx %0.8b, {%1.16b}, %2.8b" + : "+w"(result) + : "w"(temp), "w"(idx) + : /* No clobbers */); + return result; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vtbx4_s8 (int8x8_t r, int8x8x4_t tab, int8x8_t idx) +{ + int8x8_t result = r; + int8x16x2_t temp; + temp.val[0] = vcombine_s8 (tab.val[0], tab.val[1]); + temp.val[1] = vcombine_s8 (tab.val[2], tab.val[3]); + __asm__ ("ld1 {v16.16b - v17.16b }, %1\n\t" + "tbx %0.8b, {v16.16b - v17.16b}, %2.8b\n\t" + : "+w"(result) + : "Q"(temp), "w"(idx) + : "v16", "v17", "memory"); + return result; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vtbx4_u8 (uint8x8_t r, uint8x8x4_t tab, uint8x8_t idx) +{ + uint8x8_t result = r; + uint8x16x2_t temp; + temp.val[0] = vcombine_u8 (tab.val[0], tab.val[1]); + temp.val[1] = vcombine_u8 (tab.val[2], tab.val[3]); + __asm__ ("ld1 {v16.16b - v17.16b }, %1\n\t" + "tbx %0.8b, {v16.16b - v17.16b}, %2.8b\n\t" + : "+w"(result) + : "Q"(temp), "w"(idx) + : "v16", "v17", "memory"); + return result; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vtbx4_p8 (poly8x8_t r, poly8x8x4_t tab, uint8x8_t idx) +{ + poly8x8_t result = r; + poly8x16x2_t temp; + temp.val[0] = vcombine_p8 (tab.val[0], tab.val[1]); + temp.val[1] = vcombine_p8 (tab.val[2], tab.val[3]); + __asm__ ("ld1 {v16.16b - v17.16b }, %1\n\t" + "tbx %0.8b, {v16.16b - v17.16b}, %2.8b\n\t" + : "+w"(result) + : "Q"(temp), "w"(idx) + : "v16", "v17", "memory"); + return result; +} + +/* End of temporary inline asm. */ + +/* Start of optimal implementations in approved order. 
*/ + +/* vabs */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vabs_f32 (float32x2_t __a) +{ + return __builtin_aarch64_absv2sf (__a); +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vabs_f64 (float64x1_t __a) +{ + return __builtin_fabs (__a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vabs_s8 (int8x8_t __a) +{ + return __builtin_aarch64_absv8qi (__a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vabs_s16 (int16x4_t __a) +{ + return __builtin_aarch64_absv4hi (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vabs_s32 (int32x2_t __a) +{ + return __builtin_aarch64_absv2si (__a); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vabs_s64 (int64x1_t __a) +{ + return __builtin_llabs (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vabsq_f32 (float32x4_t __a) +{ + return __builtin_aarch64_absv4sf (__a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vabsq_f64 (float64x2_t __a) +{ + return __builtin_aarch64_absv2df (__a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vabsq_s8 (int8x16_t __a) +{ + return __builtin_aarch64_absv16qi (__a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vabsq_s16 (int16x8_t __a) +{ + return __builtin_aarch64_absv8hi (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vabsq_s32 (int32x4_t __a) +{ + return __builtin_aarch64_absv4si (__a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vabsq_s64 (int64x2_t __a) +{ + return __builtin_aarch64_absv2di (__a); +} + +/* vadd */ + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vaddd_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a + __b; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vaddd_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a + __b; +} + +/* vaddv */ + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vaddv_s8 (int8x8_t __a) +{ + return vget_lane_s8 (__builtin_aarch64_reduc_splus_v8qi (__a), 0); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vaddv_s16 (int16x4_t __a) +{ + return vget_lane_s16 (__builtin_aarch64_reduc_splus_v4hi (__a), 0); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vaddv_s32 (int32x2_t __a) +{ + return vget_lane_s32 (__builtin_aarch64_reduc_splus_v2si (__a), 0); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vaddv_u8 (uint8x8_t __a) +{ + return vget_lane_u8 ((uint8x8_t) + __builtin_aarch64_reduc_uplus_v8qi ((int8x8_t) __a), + 0); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vaddv_u16 (uint16x4_t __a) +{ + return vget_lane_u16 ((uint16x4_t) + __builtin_aarch64_reduc_uplus_v4hi ((int16x4_t) __a), + 0); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vaddv_u32 (uint32x2_t __a) +{ + return vget_lane_u32 ((uint32x2_t) + __builtin_aarch64_reduc_uplus_v2si ((int32x2_t) __a), + 0); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vaddvq_s8 (int8x16_t __a) +{ + return vgetq_lane_s8 (__builtin_aarch64_reduc_splus_v16qi (__a), + 0); +} + +__extension__ static __inline 
int16_t __attribute__ ((__always_inline__)) +vaddvq_s16 (int16x8_t __a) +{ + return vgetq_lane_s16 (__builtin_aarch64_reduc_splus_v8hi (__a), 0); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vaddvq_s32 (int32x4_t __a) +{ + return vgetq_lane_s32 (__builtin_aarch64_reduc_splus_v4si (__a), 0); +} + +__extension__ static __inline int64_t __attribute__ ((__always_inline__)) +vaddvq_s64 (int64x2_t __a) +{ + return vgetq_lane_s64 (__builtin_aarch64_reduc_splus_v2di (__a), 0); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vaddvq_u8 (uint8x16_t __a) +{ + return vgetq_lane_u8 ((uint8x16_t) + __builtin_aarch64_reduc_uplus_v16qi ((int8x16_t) __a), + 0); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vaddvq_u16 (uint16x8_t __a) +{ + return vgetq_lane_u16 ((uint16x8_t) + __builtin_aarch64_reduc_uplus_v8hi ((int16x8_t) __a), + 0); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vaddvq_u32 (uint32x4_t __a) +{ + return vgetq_lane_u32 ((uint32x4_t) + __builtin_aarch64_reduc_uplus_v4si ((int32x4_t) __a), + 0); +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vaddvq_u64 (uint64x2_t __a) +{ + return vgetq_lane_u64 ((uint64x2_t) + __builtin_aarch64_reduc_uplus_v2di ((int64x2_t) __a), + 0); +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vaddv_f32 (float32x2_t __a) +{ + float32x2_t __t = __builtin_aarch64_reduc_splus_v2sf (__a); + return vget_lane_f32 (__t, 0); +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vaddvq_f32 (float32x4_t __a) +{ + float32x4_t __t = __builtin_aarch64_reduc_splus_v4sf (__a); + return vgetq_lane_f32 (__t, 0); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vaddvq_f64 (float64x2_t __a) +{ + float64x2_t __t = __builtin_aarch64_reduc_splus_v2df (__a); + return vgetq_lane_f64 (__t, 0); +} + +/* vbsl */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vbsl_f32 (uint32x2_t __a, float32x2_t __b, float32x2_t __c) +{ + return __builtin_aarch64_simd_bslv2sf_suss (__a, __b, __c); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vbsl_p8 (uint8x8_t __a, poly8x8_t __b, poly8x8_t __c) +{ + return __builtin_aarch64_simd_bslv8qi_pupp (__a, __b, __c); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vbsl_p16 (uint16x4_t __a, poly16x4_t __b, poly16x4_t __c) +{ + return __builtin_aarch64_simd_bslv4hi_pupp (__a, __b, __c); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vbsl_s8 (uint8x8_t __a, int8x8_t __b, int8x8_t __c) +{ + return __builtin_aarch64_simd_bslv8qi_suss (__a, __b, __c); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vbsl_s16 (uint16x4_t __a, int16x4_t __b, int16x4_t __c) +{ + return __builtin_aarch64_simd_bslv4hi_suss (__a, __b, __c); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vbsl_s32 (uint32x2_t __a, int32x2_t __b, int32x2_t __c) +{ + return __builtin_aarch64_simd_bslv2si_suss (__a, __b, __c); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vbsl_s64 (uint64x1_t __a, int64x1_t __b, int64x1_t __c) +{ + return __builtin_aarch64_simd_bsldi_suss (__a, __b, __c); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vbsl_u8 (uint8x8_t __a, uint8x8_t __b, 
uint8x8_t __c) +{ + return __builtin_aarch64_simd_bslv8qi_uuuu (__a, __b, __c); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vbsl_u16 (uint16x4_t __a, uint16x4_t __b, uint16x4_t __c) +{ + return __builtin_aarch64_simd_bslv4hi_uuuu (__a, __b, __c); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vbsl_u32 (uint32x2_t __a, uint32x2_t __b, uint32x2_t __c) +{ + return __builtin_aarch64_simd_bslv2si_uuuu (__a, __b, __c); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vbsl_u64 (uint64x1_t __a, uint64x1_t __b, uint64x1_t __c) +{ + return __builtin_aarch64_simd_bsldi_uuuu (__a, __b, __c); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vbslq_f32 (uint32x4_t __a, float32x4_t __b, float32x4_t __c) +{ + return __builtin_aarch64_simd_bslv4sf_suss (__a, __b, __c); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vbslq_f64 (uint64x2_t __a, float64x2_t __b, float64x2_t __c) +{ + return __builtin_aarch64_simd_bslv2df_suss (__a, __b, __c); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vbslq_p8 (uint8x16_t __a, poly8x16_t __b, poly8x16_t __c) +{ + return __builtin_aarch64_simd_bslv16qi_pupp (__a, __b, __c); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vbslq_p16 (uint16x8_t __a, poly16x8_t __b, poly16x8_t __c) +{ + return __builtin_aarch64_simd_bslv8hi_pupp (__a, __b, __c); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vbslq_s8 (uint8x16_t __a, int8x16_t __b, int8x16_t __c) +{ + return __builtin_aarch64_simd_bslv16qi_suss (__a, __b, __c); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vbslq_s16 (uint16x8_t __a, int16x8_t __b, int16x8_t __c) +{ + return __builtin_aarch64_simd_bslv8hi_suss (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vbslq_s32 (uint32x4_t __a, int32x4_t __b, int32x4_t __c) +{ + return __builtin_aarch64_simd_bslv4si_suss (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vbslq_s64 (uint64x2_t __a, int64x2_t __b, int64x2_t __c) +{ + return __builtin_aarch64_simd_bslv2di_suss (__a, __b, __c); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vbslq_u8 (uint8x16_t __a, uint8x16_t __b, uint8x16_t __c) +{ + return __builtin_aarch64_simd_bslv16qi_uuuu (__a, __b, __c); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vbslq_u16 (uint16x8_t __a, uint16x8_t __b, uint16x8_t __c) +{ + return __builtin_aarch64_simd_bslv8hi_uuuu (__a, __b, __c); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vbslq_u32 (uint32x4_t __a, uint32x4_t __b, uint32x4_t __c) +{ + return __builtin_aarch64_simd_bslv4si_uuuu (__a, __b, __c); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vbslq_u64 (uint64x2_t __a, uint64x2_t __b, uint64x2_t __c) +{ + return __builtin_aarch64_simd_bslv2di_uuuu (__a, __b, __c); +} + +#ifdef __ARM_FEATURE_CRYPTO + +/* vaes */ + +static __inline uint8x16_t +vaeseq_u8 (uint8x16_t data, uint8x16_t key) +{ + return __builtin_aarch64_crypto_aesev16qi_uuu (data, key); +} + +static __inline uint8x16_t +vaesdq_u8 (uint8x16_t data, uint8x16_t key) +{ + return __builtin_aarch64_crypto_aesdv16qi_uuu (data, key); +} + +static __inline uint8x16_t 
+vaesmcq_u8 (uint8x16_t data) +{ + return __builtin_aarch64_crypto_aesmcv16qi_uu (data); +} + +static __inline uint8x16_t +vaesimcq_u8 (uint8x16_t data) +{ + return __builtin_aarch64_crypto_aesimcv16qi_uu (data); +} + +#endif + +/* vcage */ + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vcages_f32 (float32_t __a, float32_t __b) +{ + return __builtin_fabsf (__a) >= __builtin_fabsf (__b) ? -1 : 0; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcage_f32 (float32x2_t __a, float32x2_t __b) +{ + return vabs_f32 (__a) >= vabs_f32 (__b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcageq_f32 (float32x4_t __a, float32x4_t __b) +{ + return vabsq_f32 (__a) >= vabsq_f32 (__b); +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcaged_f64 (float64_t __a, float64_t __b) +{ + return __builtin_fabs (__a) >= __builtin_fabs (__b) ? -1 : 0; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcageq_f64 (float64x2_t __a, float64x2_t __b) +{ + return vabsq_f64 (__a) >= vabsq_f64 (__b); +} + +/* vcagt */ + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vcagts_f32 (float32_t __a, float32_t __b) +{ + return __builtin_fabsf (__a) > __builtin_fabsf (__b) ? -1 : 0; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcagt_f32 (float32x2_t __a, float32x2_t __b) +{ + return vabs_f32 (__a) > vabs_f32 (__b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcagtq_f32 (float32x4_t __a, float32x4_t __b) +{ + return vabsq_f32 (__a) > vabsq_f32 (__b); +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcagtd_f64 (float64_t __a, float64_t __b) +{ + return __builtin_fabs (__a) > __builtin_fabs (__b) ? -1 : 0; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcagtq_f64 (float64x2_t __a, float64x2_t __b) +{ + return vabsq_f64 (__a) > vabsq_f64 (__b); +} + +/* vcale */ + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcale_f32 (float32x2_t __a, float32x2_t __b) +{ + return vabs_f32 (__a) <= vabs_f32 (__b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcaleq_f32 (float32x4_t __a, float32x4_t __b) +{ + return vabsq_f32 (__a) <= vabsq_f32 (__b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcaleq_f64 (float64x2_t __a, float64x2_t __b) +{ + return vabsq_f64 (__a) <= vabsq_f64 (__b); +} + +/* vcalt */ + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcalt_f32 (float32x2_t __a, float32x2_t __b) +{ + return vabs_f32 (__a) < vabs_f32 (__b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcaltq_f32 (float32x4_t __a, float32x4_t __b) +{ + return vabsq_f32 (__a) < vabsq_f32 (__b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcaltq_f64 (float64x2_t __a, float64x2_t __b) +{ + return vabsq_f64 (__a) < vabsq_f64 (__b); +} + +/* vceq - vector. */ + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vceq_f32 (float32x2_t __a, float32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmeqv2sf (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vceq_f64 (float64x1_t __a, float64x1_t __b) +{ + return __a == __b ? 
-1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vceq_p8 (poly8x8_t __a, poly8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmeqv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vceq_s8 (int8x8_t __a, int8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmeqv8qi (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vceq_s16 (int16x4_t __a, int16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_cmeqv4hi (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vceq_s32 (int32x2_t __a, int32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmeqv2si (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vceq_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a == __b ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vceq_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmeqv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vceq_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_cmeqv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vceq_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmeqv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vceq_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a == __b ? -1ll : 0ll; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vceqq_f32 (float32x4_t __a, float32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmeqv4sf (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vceqq_f64 (float64x2_t __a, float64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmeqv2df (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vceqq_p8 (poly8x16_t __a, poly8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmeqv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vceqq_s8 (int8x16_t __a, int8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmeqv16qi (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vceqq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_cmeqv8hi (__a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vceqq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmeqv4si (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vceqq_s64 (int64x2_t __a, int64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmeqv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vceqq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmeqv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vceqq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_cmeqv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + 
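+
+/* Illustrative sketch (not from the original header): the vceq*
+   intrinsics return lane masks that are all ones (e.g. 0xff) where the
+   operands compare equal and all zeros elsewhere.  Negating such a
+   mask with the GCC vector extensions turns each all-ones lane into 1,
+   so a horizontal add with vaddvq_u8 (defined above) counts the equal
+   lanes.  The helper name is ours.  */
+__extension__ static __inline uint8_t __attribute__ ((__always_inline__))
+__vceq_count_equal_u8 (uint8x16_t __a, uint8x16_t __b)
+{
+  uint8x16_t __mask = vceqq_u8 (__a, __b); /* 0xff where __a == __b.  */
+  return vaddvq_u8 (-__mask);              /* each equal lane adds 1.  */
+}
+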
+__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vceqq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmeqv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vceqq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmeqv2di ((int64x2_t) __a, + (int64x2_t) __b); +} + +/* vceq - scalar. */ + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vceqs_f32 (float32_t __a, float32_t __b) +{ + return __a == __b ? -1 : 0; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vceqd_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a == __b ? -1ll : 0ll; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vceqd_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a == __b ? -1ll : 0ll; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vceqd_f64 (float64_t __a, float64_t __b) +{ + return __a == __b ? -1ll : 0ll; +} + +/* vceqz - vector. */ + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vceqz_f32 (float32x2_t __a) +{ + float32x2_t __b = {0.0f, 0.0f}; + return (uint32x2_t) __builtin_aarch64_cmeqv2sf (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vceqz_f64 (float64x1_t __a) +{ + return __a == 0.0 ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vceqz_p8 (poly8x8_t __a) +{ + poly8x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x8_t) __builtin_aarch64_cmeqv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vceqz_s8 (int8x8_t __a) +{ + int8x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x8_t) __builtin_aarch64_cmeqv8qi (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vceqz_s16 (int16x4_t __a) +{ + int16x4_t __b = {0, 0, 0, 0}; + return (uint16x4_t) __builtin_aarch64_cmeqv4hi (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vceqz_s32 (int32x2_t __a) +{ + int32x2_t __b = {0, 0}; + return (uint32x2_t) __builtin_aarch64_cmeqv2si (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vceqz_s64 (int64x1_t __a) +{ + return __a == 0ll ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vceqz_u8 (uint8x8_t __a) +{ + uint8x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x8_t) __builtin_aarch64_cmeqv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vceqz_u16 (uint16x4_t __a) +{ + uint16x4_t __b = {0, 0, 0, 0}; + return (uint16x4_t) __builtin_aarch64_cmeqv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vceqz_u32 (uint32x2_t __a) +{ + uint32x2_t __b = {0, 0}; + return (uint32x2_t) __builtin_aarch64_cmeqv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vceqz_u64 (uint64x1_t __a) +{ + return __a == 0ll ? 
-1ll : 0ll; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vceqzq_f32 (float32x4_t __a) +{ + float32x4_t __b = {0.0f, 0.0f, 0.0f, 0.0f}; + return (uint32x4_t) __builtin_aarch64_cmeqv4sf (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vceqzq_f64 (float64x2_t __a) +{ + float64x2_t __b = {0.0, 0.0}; + return (uint64x2_t) __builtin_aarch64_cmeqv2df (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vceqzq_p8 (poly8x16_t __a) +{ + poly8x16_t __b = {0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x16_t) __builtin_aarch64_cmeqv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vceqzq_s8 (int8x16_t __a) +{ + int8x16_t __b = {0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x16_t) __builtin_aarch64_cmeqv16qi (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vceqzq_s16 (int16x8_t __a) +{ + int16x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint16x8_t) __builtin_aarch64_cmeqv8hi (__a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vceqzq_s32 (int32x4_t __a) +{ + int32x4_t __b = {0, 0, 0, 0}; + return (uint32x4_t) __builtin_aarch64_cmeqv4si (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vceqzq_s64 (int64x2_t __a) +{ + int64x2_t __b = {0, 0}; + return (uint64x2_t) __builtin_aarch64_cmeqv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vceqzq_u8 (uint8x16_t __a) +{ + uint8x16_t __b = {0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x16_t) __builtin_aarch64_cmeqv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vceqzq_u16 (uint16x8_t __a) +{ + uint16x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint16x8_t) __builtin_aarch64_cmeqv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vceqzq_u32 (uint32x4_t __a) +{ + uint32x4_t __b = {0, 0, 0, 0}; + return (uint32x4_t) __builtin_aarch64_cmeqv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vceqzq_u64 (uint64x2_t __a) +{ + uint64x2_t __b = {0, 0}; + return (uint64x2_t) __builtin_aarch64_cmeqv2di ((int64x2_t) __a, + (int64x2_t) __b); +} + +/* vceqz - scalar. */ + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vceqzs_f32 (float32_t __a) +{ + return __a == 0.0f ? -1 : 0; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vceqzd_s64 (int64x1_t __a) +{ + return __a == 0 ? -1ll : 0ll; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vceqzd_u64 (int64x1_t __a) +{ + return __a == 0 ? -1ll : 0ll; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vceqzd_f64 (float64_t __a) +{ + return __a == 0.0 ? -1ll : 0ll; +} + +/* vcge - vector. */ + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcge_f32 (float32x2_t __a, float32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmgev2sf (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcge_f64 (float64x1_t __a, float64x1_t __b) +{ + return __a >= __b ? 
-1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcge_p8 (poly8x8_t __a, poly8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmgev8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcge_s8 (int8x8_t __a, int8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmgev8qi (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vcge_s16 (int16x4_t __a, int16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_cmgev4hi (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcge_s32 (int32x2_t __a, int32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmgev2si (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcge_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a >= __b ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcge_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmgeuv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vcge_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_cmgeuv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcge_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmgeuv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcge_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a >= __b ? -1ll : 0ll; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcgeq_f32 (float32x4_t __a, float32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmgev4sf (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcgeq_f64 (float64x2_t __a, float64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmgev2df (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcgeq_p8 (poly8x16_t __a, poly8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmgev16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcgeq_s8 (int8x16_t __a, int8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmgev16qi (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcgeq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_cmgev8hi (__a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcgeq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmgev4si (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcgeq_s64 (int64x2_t __a, int64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmgev2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcgeq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmgeuv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcgeq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_cmgeuv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + 
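+
+/* Illustrative sketch (not from the original header): because the
+   vcge* masks are all ones or all zeros per lane, feeding them to the
+   vbsl* bit-selects defined above gives a branch-free element-wise
+   select; with an __a >= __b mask this is an element-wise maximum.
+   The helper name is ours.  */
+__extension__ static __inline float32x4_t __attribute__ ((__always_inline__))
+__vcge_select_max_f32 (float32x4_t __a, float32x4_t __b)
+{
+  uint32x4_t __ge = vcgeq_f32 (__a, __b); /* all ones where __a >= __b.  */
+  return vbslq_f32 (__ge, __a, __b);      /* pick __a there, else __b.  */
+}
+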
+__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcgeq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmgeuv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcgeq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmgeuv2di ((int64x2_t) __a, + (int64x2_t) __b); +} + +/* vcge - scalar. */ + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vcges_f32 (float32_t __a, float32_t __b) +{ + return __a >= __b ? -1 : 0; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcged_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a >= __b ? -1ll : 0ll; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcged_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a >= __b ? -1ll : 0ll; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcged_f64 (float64_t __a, float64_t __b) +{ + return __a >= __b ? -1ll : 0ll; +} + +/* vcgez - vector. */ + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcgez_f32 (float32x2_t __a) +{ + float32x2_t __b = {0.0f, 0.0f}; + return (uint32x2_t) __builtin_aarch64_cmgev2sf (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgez_f64 (float64x1_t __a) +{ + return __a >= 0.0 ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcgez_p8 (poly8x8_t __a) +{ + poly8x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x8_t) __builtin_aarch64_cmgev8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcgez_s8 (int8x8_t __a) +{ + int8x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x8_t) __builtin_aarch64_cmgev8qi (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vcgez_s16 (int16x4_t __a) +{ + int16x4_t __b = {0, 0, 0, 0}; + return (uint16x4_t) __builtin_aarch64_cmgev4hi (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcgez_s32 (int32x2_t __a) +{ + int32x2_t __b = {0, 0}; + return (uint32x2_t) __builtin_aarch64_cmgev2si (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgez_s64 (int64x1_t __a) +{ + return __a >= 0ll ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcgez_u8 (uint8x8_t __a) +{ + uint8x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x8_t) __builtin_aarch64_cmgeuv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vcgez_u16 (uint16x4_t __a) +{ + uint16x4_t __b = {0, 0, 0, 0}; + return (uint16x4_t) __builtin_aarch64_cmgeuv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcgez_u32 (uint32x2_t __a) +{ + uint32x2_t __b = {0, 0}; + return (uint32x2_t) __builtin_aarch64_cmgeuv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgez_u64 (uint64x1_t __a) +{ + return __a >= 0ll ? 
-1ll : 0ll; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcgezq_f32 (float32x4_t __a) +{ + float32x4_t __b = {0.0f, 0.0f, 0.0f, 0.0f}; + return (uint32x4_t) __builtin_aarch64_cmgev4sf (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcgezq_f64 (float64x2_t __a) +{ + float64x2_t __b = {0.0, 0.0}; + return (uint64x2_t) __builtin_aarch64_cmgev2df (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcgezq_p8 (poly8x16_t __a) +{ + poly8x16_t __b = {0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x16_t) __builtin_aarch64_cmgev16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcgezq_s8 (int8x16_t __a) +{ + int8x16_t __b = {0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x16_t) __builtin_aarch64_cmgev16qi (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcgezq_s16 (int16x8_t __a) +{ + int16x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint16x8_t) __builtin_aarch64_cmgev8hi (__a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcgezq_s32 (int32x4_t __a) +{ + int32x4_t __b = {0, 0, 0, 0}; + return (uint32x4_t) __builtin_aarch64_cmgev4si (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcgezq_s64 (int64x2_t __a) +{ + int64x2_t __b = {0, 0}; + return (uint64x2_t) __builtin_aarch64_cmgev2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcgezq_u8 (uint8x16_t __a) +{ + uint8x16_t __b = {0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x16_t) __builtin_aarch64_cmgeuv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcgezq_u16 (uint16x8_t __a) +{ + uint16x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint16x8_t) __builtin_aarch64_cmgeuv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcgezq_u32 (uint32x4_t __a) +{ + uint32x4_t __b = {0, 0, 0, 0}; + return (uint32x4_t) __builtin_aarch64_cmgeuv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcgezq_u64 (uint64x2_t __a) +{ + uint64x2_t __b = {0, 0}; + return (uint64x2_t) __builtin_aarch64_cmgeuv2di ((int64x2_t) __a, + (int64x2_t) __b); +} + +/* vcgez - scalar. */ + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vcgezs_f32 (float32_t __a) +{ + return __a >= 0.0f ? -1 : 0; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgezd_s64 (int64x1_t __a) +{ + return __a >= 0 ? -1ll : 0ll; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgezd_u64 (int64x1_t __a) +{ + return __a >= 0 ? -1ll : 0ll; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcgezd_f64 (float64_t __a) +{ + return __a >= 0.0 ? -1ll : 0ll; +} + +/* vcgt - vector. */ + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcgt_f32 (float32x2_t __a, float32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmgtv2sf (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgt_f64 (float64x1_t __a, float64x1_t __b) +{ + return __a > __b ? 
-1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcgt_p8 (poly8x8_t __a, poly8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmgtv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcgt_s8 (int8x8_t __a, int8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmgtv8qi (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vcgt_s16 (int16x4_t __a, int16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_cmgtv4hi (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcgt_s32 (int32x2_t __a, int32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmgtv2si (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgt_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a > __b ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcgt_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmgtuv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vcgt_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_cmgtuv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcgt_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmgtuv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgt_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a > __b ? -1ll : 0ll; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcgtq_f32 (float32x4_t __a, float32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmgtv4sf (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcgtq_f64 (float64x2_t __a, float64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmgtv2df (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcgtq_p8 (poly8x16_t __a, poly8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmgtv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcgtq_s8 (int8x16_t __a, int8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmgtv16qi (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcgtq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_cmgtv8hi (__a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcgtq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmgtv4si (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcgtq_s64 (int64x2_t __a, int64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmgtv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcgtq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmgtuv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcgtq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_cmgtuv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + 
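+
+/* Illustrative sketch (not from the original header): a vcgt* mask
+   against zero clamps negative lanes in a single bit-select, i.e. a
+   branch-free rectifier.  vcgtzq_f32 is only defined further below, so
+   this sketch builds the zero operand directly.  The helper name is
+   ours.  */
+__extension__ static __inline float32x4_t __attribute__ ((__always_inline__))
+__vcgt_clamp_negative_f32 (float32x4_t __a)
+{
+  float32x4_t __zero = {0.0f, 0.0f, 0.0f, 0.0f};
+  uint32x4_t __pos = vcgtq_f32 (__a, __zero); /* all ones where __a > 0.  */
+  return vbslq_f32 (__pos, __a, __zero);      /* keep __a, else 0.0f.  */
+}
+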
+__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcgtq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmgtuv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcgtq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmgtuv2di ((int64x2_t) __a, + (int64x2_t) __b); +} + +/* vcgt - scalar. */ + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vcgts_f32 (float32_t __a, float32_t __b) +{ + return __a > __b ? -1 : 0; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgtd_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a > __b ? -1ll : 0ll; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgtd_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a > __b ? -1ll : 0ll; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcgtd_f64 (float64_t __a, float64_t __b) +{ + return __a > __b ? -1ll : 0ll; +} + +/* vcgtz - vector. */ + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcgtz_f32 (float32x2_t __a) +{ + float32x2_t __b = {0.0f, 0.0f}; + return (uint32x2_t) __builtin_aarch64_cmgtv2sf (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgtz_f64 (float64x1_t __a) +{ + return __a > 0.0 ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcgtz_p8 (poly8x8_t __a) +{ + poly8x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x8_t) __builtin_aarch64_cmgtv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcgtz_s8 (int8x8_t __a) +{ + int8x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x8_t) __builtin_aarch64_cmgtv8qi (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vcgtz_s16 (int16x4_t __a) +{ + int16x4_t __b = {0, 0, 0, 0}; + return (uint16x4_t) __builtin_aarch64_cmgtv4hi (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcgtz_s32 (int32x2_t __a) +{ + int32x2_t __b = {0, 0}; + return (uint32x2_t) __builtin_aarch64_cmgtv2si (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgtz_s64 (int64x1_t __a) +{ + return __a > 0ll ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcgtz_u8 (uint8x8_t __a) +{ + uint8x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x8_t) __builtin_aarch64_cmgtuv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vcgtz_u16 (uint16x4_t __a) +{ + uint16x4_t __b = {0, 0, 0, 0}; + return (uint16x4_t) __builtin_aarch64_cmgtuv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcgtz_u32 (uint32x2_t __a) +{ + uint32x2_t __b = {0, 0}; + return (uint32x2_t) __builtin_aarch64_cmgtuv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgtz_u64 (uint64x1_t __a) +{ + return __a > 0ll ? 
-1ll : 0ll; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcgtzq_f32 (float32x4_t __a) +{ + float32x4_t __b = {0.0f, 0.0f, 0.0f, 0.0f}; + return (uint32x4_t) __builtin_aarch64_cmgtv4sf (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcgtzq_f64 (float64x2_t __a) +{ + float64x2_t __b = {0.0, 0.0}; + return (uint64x2_t) __builtin_aarch64_cmgtv2df (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcgtzq_p8 (poly8x16_t __a) +{ + poly8x16_t __b = {0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x16_t) __builtin_aarch64_cmgtv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcgtzq_s8 (int8x16_t __a) +{ + int8x16_t __b = {0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x16_t) __builtin_aarch64_cmgtv16qi (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcgtzq_s16 (int16x8_t __a) +{ + int16x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint16x8_t) __builtin_aarch64_cmgtv8hi (__a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcgtzq_s32 (int32x4_t __a) +{ + int32x4_t __b = {0, 0, 0, 0}; + return (uint32x4_t) __builtin_aarch64_cmgtv4si (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcgtzq_s64 (int64x2_t __a) +{ + int64x2_t __b = {0, 0}; + return (uint64x2_t) __builtin_aarch64_cmgtv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcgtzq_u8 (uint8x16_t __a) +{ + uint8x16_t __b = {0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x16_t) __builtin_aarch64_cmgtuv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcgtzq_u16 (uint16x8_t __a) +{ + uint16x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint16x8_t) __builtin_aarch64_cmgtuv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcgtzq_u32 (uint32x4_t __a) +{ + uint32x4_t __b = {0, 0, 0, 0}; + return (uint32x4_t) __builtin_aarch64_cmgtuv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcgtzq_u64 (uint64x2_t __a) +{ + uint64x2_t __b = {0, 0}; + return (uint64x2_t) __builtin_aarch64_cmgtuv2di ((int64x2_t) __a, + (int64x2_t) __b); +} + +/* vcgtz - scalar. */ + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vcgtzs_f32 (float32_t __a) +{ + return __a > 0.0f ? -1 : 0; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgtzd_s64 (int64x1_t __a) +{ + return __a > 0 ? -1ll : 0ll; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcgtzd_u64 (int64x1_t __a) +{ + return __a > 0 ? -1ll : 0ll; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcgtzd_f64 (float64_t __a) +{ + return __a > 0.0 ? -1ll : 0ll; +} + +/* vcle - vector. */ + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcle_f32 (float32x2_t __a, float32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmgev2sf (__b, __a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcle_f64 (float64x1_t __a, float64x1_t __b) +{ + return __a <= __b ? 
-1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcle_p8 (poly8x8_t __a, poly8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmgev8qi ((int8x8_t) __b, + (int8x8_t) __a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcle_s8 (int8x8_t __a, int8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmgev8qi (__b, __a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vcle_s16 (int16x4_t __a, int16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_cmgev4hi (__b, __a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcle_s32 (int32x2_t __a, int32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmgev2si (__b, __a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcle_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a <= __b ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcle_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmgeuv8qi ((int8x8_t) __b, + (int8x8_t) __a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vcle_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_cmgeuv4hi ((int16x4_t) __b, + (int16x4_t) __a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcle_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmgeuv2si ((int32x2_t) __b, + (int32x2_t) __a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcle_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a <= __b ? -1ll : 0ll; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcleq_f32 (float32x4_t __a, float32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmgev4sf (__b, __a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcleq_f64 (float64x2_t __a, float64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmgev2df (__b, __a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcleq_p8 (poly8x16_t __a, poly8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmgev16qi ((int8x16_t) __b, + (int8x16_t) __a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcleq_s8 (int8x16_t __a, int8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmgev16qi (__b, __a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcleq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_cmgev8hi (__b, __a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcleq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmgev4si (__b, __a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcleq_s64 (int64x2_t __a, int64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmgev2di (__b, __a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcleq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmgeuv16qi ((int8x16_t) __b, + (int8x16_t) __a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcleq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_cmgeuv8hi ((int16x8_t) __b, + (int16x8_t) __a); +} + 
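+
+/* Illustrative sketch (not from the original header): the two-operand
+   vcle* forms above are implemented by swapping their operands into
+   the cmge builtins (lane-wise, __a <= __b is computed as __b >= __a),
+   while the register-versus-zero vclez* forms below use the cmle
+   builtins directly.  The self-check here therefore always yields an
+   all-ones mask.  The helper name is ours.  */
+__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__))
+__vcle_swap_check_f32 (float32x4_t __a, float32x4_t __b)
+{
+  return vceqq_u32 (vcleq_f32 (__a, __b), vcgeq_f32 (__b, __a));
+}
+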
+__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcleq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmgeuv4si ((int32x4_t) __b, + (int32x4_t) __a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcleq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmgeuv2di ((int64x2_t) __b, + (int64x2_t) __a); +} + +/* vcle - scalar. */ + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vcles_f32 (float32_t __a, float32_t __b) +{ + return __a <= __b ? -1 : 0; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcled_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a <= __b ? -1ll : 0ll; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcled_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a <= __b ? -1ll : 0ll; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcled_f64 (float64_t __a, float64_t __b) +{ + return __a <= __b ? -1ll : 0ll; +} + +/* vclez - vector. */ + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vclez_f32 (float32x2_t __a) +{ + float32x2_t __b = {0.0f, 0.0f}; + return (uint32x2_t) __builtin_aarch64_cmlev2sf (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vclez_f64 (float64x1_t __a) +{ + return __a <= 0.0 ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vclez_p8 (poly8x8_t __a) +{ + poly8x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x8_t) __builtin_aarch64_cmlev8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vclez_s8 (int8x8_t __a) +{ + int8x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x8_t) __builtin_aarch64_cmlev8qi (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vclez_s16 (int16x4_t __a) +{ + int16x4_t __b = {0, 0, 0, 0}; + return (uint16x4_t) __builtin_aarch64_cmlev4hi (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vclez_s32 (int32x2_t __a) +{ + int32x2_t __b = {0, 0}; + return (uint32x2_t) __builtin_aarch64_cmlev2si (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vclez_s64 (int64x1_t __a) +{ + return __a <= 0ll ? -1ll : 0ll; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vclez_u64 (uint64x1_t __a) +{ + return __a <= 0ll ? 
-1ll : 0ll; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vclezq_f32 (float32x4_t __a) +{ + float32x4_t __b = {0.0f, 0.0f, 0.0f, 0.0f}; + return (uint32x4_t) __builtin_aarch64_cmlev4sf (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vclezq_f64 (float64x2_t __a) +{ + float64x2_t __b = {0.0, 0.0}; + return (uint64x2_t) __builtin_aarch64_cmlev2df (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vclezq_p8 (poly8x16_t __a) +{ + poly8x16_t __b = {0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x16_t) __builtin_aarch64_cmlev16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vclezq_s8 (int8x16_t __a) +{ + int8x16_t __b = {0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x16_t) __builtin_aarch64_cmlev16qi (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vclezq_s16 (int16x8_t __a) +{ + int16x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint16x8_t) __builtin_aarch64_cmlev8hi (__a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vclezq_s32 (int32x4_t __a) +{ + int32x4_t __b = {0, 0, 0, 0}; + return (uint32x4_t) __builtin_aarch64_cmlev4si (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vclezq_s64 (int64x2_t __a) +{ + int64x2_t __b = {0, 0}; + return (uint64x2_t) __builtin_aarch64_cmlev2di (__a, __b); +} + +/* vclez - scalar. */ + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vclezs_f32 (float32_t __a) +{ + return __a <= 0.0f ? -1 : 0; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vclezd_s64 (int64x1_t __a) +{ + return __a <= 0 ? -1ll : 0ll; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vclezd_u64 (int64x1_t __a) +{ + return __a <= 0 ? -1ll : 0ll; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vclezd_f64 (float64_t __a) +{ + return __a <= 0.0 ? -1ll : 0ll; +} + +/* vclt - vector. */ + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vclt_f32 (float32x2_t __a, float32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmgtv2sf (__b, __a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vclt_f64 (float64x1_t __a, float64x1_t __b) +{ + return __a < __b ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vclt_p8 (poly8x8_t __a, poly8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmgtv8qi ((int8x8_t) __b, + (int8x8_t) __a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vclt_s8 (int8x8_t __a, int8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmgtv8qi (__b, __a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vclt_s16 (int16x4_t __a, int16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_cmgtv4hi (__b, __a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vclt_s32 (int32x2_t __a, int32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmgtv2si (__b, __a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vclt_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a < __b ? 
-1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vclt_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmgtuv8qi ((int8x8_t) __b, + (int8x8_t) __a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vclt_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_cmgtuv4hi ((int16x4_t) __b, + (int16x4_t) __a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vclt_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmgtuv2si ((int32x2_t) __b, + (int32x2_t) __a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vclt_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a < __b ? -1ll : 0ll; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcltq_f32 (float32x4_t __a, float32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmgtv4sf (__b, __a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcltq_f64 (float64x2_t __a, float64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmgtv2df (__b, __a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcltq_p8 (poly8x16_t __a, poly8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmgtv16qi ((int8x16_t) __b, + (int8x16_t) __a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcltq_s8 (int8x16_t __a, int8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmgtv16qi (__b, __a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcltq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_cmgtv8hi (__b, __a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcltq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmgtv4si (__b, __a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcltq_s64 (int64x2_t __a, int64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmgtv2di (__b, __a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcltq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmgtuv16qi ((int8x16_t) __b, + (int8x16_t) __a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcltq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_cmgtuv8hi ((int16x8_t) __b, + (int16x8_t) __a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcltq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmgtuv4si ((int32x4_t) __b, + (int32x4_t) __a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcltq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmgtuv2di ((int64x2_t) __b, + (int64x2_t) __a); +} + +/* vclt - scalar. */ + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vclts_f32 (float32_t __a, float32_t __b) +{ + return __a < __b ? -1 : 0; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcltd_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a < __b ? -1ll : 0ll; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcltd_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a < __b ? 
-1ll : 0ll; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcltd_f64 (float64_t __a, float64_t __b) +{ + return __a < __b ? -1ll : 0ll; +} + +/* vcltz - vector. */ + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcltz_f32 (float32x2_t __a) +{ + float32x2_t __b = {0.0f, 0.0f}; + return (uint32x2_t) __builtin_aarch64_cmltv2sf (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcltz_f64 (float64x1_t __a) +{ + return __a < 0.0 ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcltz_p8 (poly8x8_t __a) +{ + poly8x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x8_t) __builtin_aarch64_cmltv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vcltz_s8 (int8x8_t __a) +{ + int8x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x8_t) __builtin_aarch64_cmltv8qi (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vcltz_s16 (int16x4_t __a) +{ + int16x4_t __b = {0, 0, 0, 0}; + return (uint16x4_t) __builtin_aarch64_cmltv4hi (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcltz_s32 (int32x2_t __a) +{ + int32x2_t __b = {0, 0}; + return (uint32x2_t) __builtin_aarch64_cmltv2si (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcltz_s64 (int64x1_t __a) +{ + return __a < 0ll ? -1ll : 0ll; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcltzq_f32 (float32x4_t __a) +{ + float32x4_t __b = {0.0f, 0.0f, 0.0f, 0.0f}; + return (uint32x4_t) __builtin_aarch64_cmltv4sf (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcltzq_f64 (float64x2_t __a) +{ + float64x2_t __b = {0.0, 0.0}; + return (uint64x2_t) __builtin_aarch64_cmltv2df (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcltzq_p8 (poly8x16_t __a) +{ + poly8x16_t __b = {0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x16_t) __builtin_aarch64_cmltv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vcltzq_s8 (int8x16_t __a) +{ + int8x16_t __b = {0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0}; + return (uint8x16_t) __builtin_aarch64_cmltv16qi (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vcltzq_s16 (int16x8_t __a) +{ + int16x8_t __b = {0, 0, 0, 0, 0, 0, 0, 0}; + return (uint16x8_t) __builtin_aarch64_cmltv8hi (__a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcltzq_s32 (int32x4_t __a) +{ + int32x4_t __b = {0, 0, 0, 0}; + return (uint32x4_t) __builtin_aarch64_cmltv4si (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcltzq_s64 (int64x2_t __a) +{ + int64x2_t __b = {0, 0}; + return (uint64x2_t) __builtin_aarch64_cmltv2di (__a, __b); +} + +/* vcltz - scalar. */ + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vcltzs_f32 (float32_t __a) +{ + return __a < 0.0f ? -1 : 0; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcltzd_s64 (int64x1_t __a) +{ + return __a < 0 ? 
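+/* Editor's note -- illustrative sketch only, not from the upstream
+   header.  The all-ones / all-zeros masks produced by the vclt and
+   vcltz families combine naturally with vbsl for branch-free
+   selection.  A hypothetical clamp-negatives-to-zero helper (assumes
+   AArch64 and arm_neon.h in scope; the helper name is the editor's):
+
+     int32x4_t
+     clamp_neg_to_zero (int32x4_t v)
+     {
+       uint32x4_t neg = vcltzq_s32 (v);              // mask of lanes < 0
+       return vbslq_s32 (neg, vdupq_n_s32 (0), v);   // pick 0 where neg
+     }
+*/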
-1ll : 0ll; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vcltzd_u64 (int64x1_t __a) +{ + return __a < 0 ? -1ll : 0ll; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcltzd_f64 (float64_t __a) +{ + return __a < 0.0 ? -1ll : 0ll; +} + +/* vclz. */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vclz_s8 (int8x8_t __a) +{ + return __builtin_aarch64_clzv8qi (__a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vclz_s16 (int16x4_t __a) +{ + return __builtin_aarch64_clzv4hi (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vclz_s32 (int32x2_t __a) +{ + return __builtin_aarch64_clzv2si (__a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vclz_u8 (uint8x8_t __a) +{ + return (uint8x8_t)__builtin_aarch64_clzv8qi ((int8x8_t)__a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vclz_u16 (uint16x4_t __a) +{ + return (uint16x4_t)__builtin_aarch64_clzv4hi ((int16x4_t)__a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vclz_u32 (uint32x2_t __a) +{ + return (uint32x2_t)__builtin_aarch64_clzv2si ((int32x2_t)__a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vclzq_s8 (int8x16_t __a) +{ + return __builtin_aarch64_clzv16qi (__a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vclzq_s16 (int16x8_t __a) +{ + return __builtin_aarch64_clzv8hi (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vclzq_s32 (int32x4_t __a) +{ + return __builtin_aarch64_clzv4si (__a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vclzq_u8 (uint8x16_t __a) +{ + return (uint8x16_t)__builtin_aarch64_clzv16qi ((int8x16_t)__a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vclzq_u16 (uint16x8_t __a) +{ + return (uint16x8_t)__builtin_aarch64_clzv8hi ((int16x8_t)__a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vclzq_u32 (uint32x4_t __a) +{ + return (uint32x4_t)__builtin_aarch64_clzv4si ((int32x4_t)__a); +} + +/* vcvt (double -> float). */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vcvt_f32_f64 (float64x2_t __a) +{ + return __builtin_aarch64_float_truncate_lo_v2sf (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vcvt_high_f32_f64 (float32x2_t __a, float64x2_t __b) +{ + return __builtin_aarch64_float_truncate_hi_v4sf (__a, __b); +} + +/* vcvt (float -> double). 
*/ + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vcvt_f64_f32 (float32x2_t __a) +{ + + return __builtin_aarch64_float_extend_lo_v2df (__a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vcvt_high_f64_f32 (float32x4_t __a) +{ + return __builtin_aarch64_vec_unpacks_hi_v4sf (__a); +} + +/* vcvt (<u>int -> float) */ + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vcvtd_f64_s64 (int64_t __a) +{ + return (float64_t) __a; +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vcvtd_f64_u64 (uint64_t __a) +{ + return (float64_t) __a; +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vcvts_f32_s32 (int32_t __a) +{ + return (float32_t) __a; +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vcvts_f32_u32 (uint32_t __a) +{ + return (float32_t) __a; +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vcvt_f32_s32 (int32x2_t __a) +{ + return __builtin_aarch64_floatv2siv2sf (__a); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vcvt_f32_u32 (uint32x2_t __a) +{ + return __builtin_aarch64_floatunsv2siv2sf ((int32x2_t) __a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vcvtq_f32_s32 (int32x4_t __a) +{ + return __builtin_aarch64_floatv4siv4sf (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vcvtq_f32_u32 (uint32x4_t __a) +{ + return __builtin_aarch64_floatunsv4siv4sf ((int32x4_t) __a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vcvtq_f64_s64 (int64x2_t __a) +{ + return __builtin_aarch64_floatv2div2df (__a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vcvtq_f64_u64 (uint64x2_t __a) +{ + return __builtin_aarch64_floatunsv2div2df ((int64x2_t) __a); +} + +/* vcvt (float -> <u>int) */ + +__extension__ static __inline int64_t __attribute__ ((__always_inline__)) +vcvtd_s64_f64 (float64_t __a) +{ + return (int64_t) __a; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcvtd_u64_f64 (float64_t __a) +{ + return (uint64_t) __a; +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vcvts_s32_f32 (float32_t __a) +{ + return (int32_t) __a; +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vcvts_u32_f32 (float32_t __a) +{ + return (uint32_t) __a; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vcvt_s32_f32 (float32x2_t __a) +{ + return __builtin_aarch64_lbtruncv2sfv2si (__a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcvt_u32_f32 (float32x2_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. */ + return (uint32x2_t) __builtin_aarch64_lbtruncuv2sfv2si (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vcvtq_s32_f32 (float32x4_t __a) +{ + return __builtin_aarch64_lbtruncv4sfv4si (__a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcvtq_u32_f32 (float32x4_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. 
*/ + return (uint32x4_t) __builtin_aarch64_lbtruncuv4sfv4si (__a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vcvtq_s64_f64 (float64x2_t __a) +{ + return __builtin_aarch64_lbtruncv2dfv2di (__a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcvtq_u64_f64 (float64x2_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. */ + return (uint64x2_t) __builtin_aarch64_lbtruncuv2dfv2di (__a); +} + +/* vcvta */ + +__extension__ static __inline int64_t __attribute__ ((__always_inline__)) +vcvtad_s64_f64 (float64_t __a) +{ + return __builtin_aarch64_lrounddfdi (__a); +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcvtad_u64_f64 (float64_t __a) +{ + return __builtin_aarch64_lroundudfdi (__a); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vcvtas_s32_f32 (float32_t __a) +{ + return __builtin_aarch64_lroundsfsi (__a); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vcvtas_u32_f32 (float32_t __a) +{ + return __builtin_aarch64_lroundusfsi (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vcvta_s32_f32 (float32x2_t __a) +{ + return __builtin_aarch64_lroundv2sfv2si (__a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcvta_u32_f32 (float32x2_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. */ + return (uint32x2_t) __builtin_aarch64_lrounduv2sfv2si (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vcvtaq_s32_f32 (float32x4_t __a) +{ + return __builtin_aarch64_lroundv4sfv4si (__a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcvtaq_u32_f32 (float32x4_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. */ + return (uint32x4_t) __builtin_aarch64_lrounduv4sfv4si (__a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vcvtaq_s64_f64 (float64x2_t __a) +{ + return __builtin_aarch64_lroundv2dfv2di (__a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcvtaq_u64_f64 (float64x2_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. */ + return (uint64x2_t) __builtin_aarch64_lrounduv2dfv2di (__a); +} + +/* vcvtm */ + +__extension__ static __inline int64_t __attribute__ ((__always_inline__)) +vcvtmd_s64_f64 (float64_t __a) +{ + return __builtin_llfloor (__a); +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcvtmd_u64_f64 (float64_t __a) +{ + return __builtin_aarch64_lfloorudfdi (__a); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vcvtms_s32_f32 (float32_t __a) +{ + return __builtin_ifloorf (__a); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vcvtms_u32_f32 (float32_t __a) +{ + return __builtin_aarch64_lfloorusfsi (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vcvtm_s32_f32 (float32x2_t __a) +{ + return __builtin_aarch64_lfloorv2sfv2si (__a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcvtm_u32_f32 (float32x2_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. 
*/ + return (uint32x2_t) __builtin_aarch64_lflooruv2sfv2si (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vcvtmq_s32_f32 (float32x4_t __a) +{ + return __builtin_aarch64_lfloorv4sfv4si (__a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcvtmq_u32_f32 (float32x4_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. */ + return (uint32x4_t) __builtin_aarch64_lflooruv4sfv4si (__a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vcvtmq_s64_f64 (float64x2_t __a) +{ + return __builtin_aarch64_lfloorv2dfv2di (__a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcvtmq_u64_f64 (float64x2_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. */ + return (uint64x2_t) __builtin_aarch64_lflooruv2dfv2di (__a); +} + +/* vcvtn */ + +__extension__ static __inline int64_t __attribute__ ((__always_inline__)) +vcvtnd_s64_f64 (float64_t __a) +{ + return __builtin_aarch64_lfrintndfdi (__a); +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcvtnd_u64_f64 (float64_t __a) +{ + return __builtin_aarch64_lfrintnudfdi (__a); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vcvtns_s32_f32 (float32_t __a) +{ + return __builtin_aarch64_lfrintnsfsi (__a); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vcvtns_u32_f32 (float32_t __a) +{ + return __builtin_aarch64_lfrintnusfsi (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vcvtn_s32_f32 (float32x2_t __a) +{ + return __builtin_aarch64_lfrintnv2sfv2si (__a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcvtn_u32_f32 (float32x2_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. */ + return (uint32x2_t) __builtin_aarch64_lfrintnuv2sfv2si (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vcvtnq_s32_f32 (float32x4_t __a) +{ + return __builtin_aarch64_lfrintnv4sfv4si (__a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcvtnq_u32_f32 (float32x4_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. */ + return (uint32x4_t) __builtin_aarch64_lfrintnuv4sfv4si (__a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vcvtnq_s64_f64 (float64x2_t __a) +{ + return __builtin_aarch64_lfrintnv2dfv2di (__a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcvtnq_u64_f64 (float64x2_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. 
*/ + return (uint64x2_t) __builtin_aarch64_lfrintnuv2dfv2di (__a); +} + +/* vcvtp */ + +__extension__ static __inline int64_t __attribute__ ((__always_inline__)) +vcvtpd_s64_f64 (float64_t __a) +{ + return __builtin_llceil (__a); +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vcvtpd_u64_f64 (float64_t __a) +{ + return __builtin_aarch64_lceiludfdi (__a); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vcvtps_s32_f32 (float32_t __a) +{ + return __builtin_iceilf (__a); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vcvtps_u32_f32 (float32_t __a) +{ + return __builtin_aarch64_lceilusfsi (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vcvtp_s32_f32 (float32x2_t __a) +{ + return __builtin_aarch64_lceilv2sfv2si (__a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vcvtp_u32_f32 (float32x2_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. */ + return (uint32x2_t) __builtin_aarch64_lceiluv2sfv2si (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vcvtpq_s32_f32 (float32x4_t __a) +{ + return __builtin_aarch64_lceilv4sfv4si (__a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vcvtpq_u32_f32 (float32x4_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. */ + return (uint32x4_t) __builtin_aarch64_lceiluv4sfv4si (__a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vcvtpq_s64_f64 (float64x2_t __a) +{ + return __builtin_aarch64_lceilv2dfv2di (__a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vcvtpq_u64_f64 (float64x2_t __a) +{ + /* TODO: This cast should go away when builtins have + their correct types. 
*/ + return (uint64x2_t) __builtin_aarch64_lceiluv2dfv2di (__a); +} + +/* vdup_n */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vdup_n_f32 (float32_t __a) +{ + return (float32x2_t) {__a, __a}; +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vdup_n_f64 (float64_t __a) +{ + return __a; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vdup_n_p8 (poly8_t __a) +{ + return (poly8x8_t) {__a, __a, __a, __a, __a, __a, __a, __a}; +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vdup_n_p16 (poly16_t __a) +{ + return (poly16x4_t) {__a, __a, __a, __a}; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vdup_n_s8 (int8_t __a) +{ + return (int8x8_t) {__a, __a, __a, __a, __a, __a, __a, __a}; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vdup_n_s16 (int16_t __a) +{ + return (int16x4_t) {__a, __a, __a, __a}; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vdup_n_s32 (int32_t __a) +{ + return (int32x2_t) {__a, __a}; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vdup_n_s64 (int64_t __a) +{ + return __a; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vdup_n_u8 (uint8_t __a) +{ + return (uint8x8_t) {__a, __a, __a, __a, __a, __a, __a, __a}; +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vdup_n_u16 (uint16_t __a) +{ + return (uint16x4_t) {__a, __a, __a, __a}; +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vdup_n_u32 (uint32_t __a) +{ + return (uint32x2_t) {__a, __a}; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vdup_n_u64 (uint64_t __a) +{ + return __a; +} + +/* vdupq_n */ + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vdupq_n_f32 (float32_t __a) +{ + return (float32x4_t) {__a, __a, __a, __a}; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vdupq_n_f64 (float64_t __a) +{ + return (float64x2_t) {__a, __a}; +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vdupq_n_p8 (uint32_t __a) +{ + return (poly8x16_t) {__a, __a, __a, __a, __a, __a, __a, __a, + __a, __a, __a, __a, __a, __a, __a, __a}; +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vdupq_n_p16 (uint32_t __a) +{ + return (poly16x8_t) {__a, __a, __a, __a, __a, __a, __a, __a}; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vdupq_n_s8 (int32_t __a) +{ + return (int8x16_t) {__a, __a, __a, __a, __a, __a, __a, __a, + __a, __a, __a, __a, __a, __a, __a, __a}; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vdupq_n_s16 (int32_t __a) +{ + return (int16x8_t) {__a, __a, __a, __a, __a, __a, __a, __a}; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vdupq_n_s32 (int32_t __a) +{ + return (int32x4_t) {__a, __a, __a, __a}; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vdupq_n_s64 (int64_t __a) +{ + return (int64x2_t) {__a, __a}; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vdupq_n_u8 (uint32_t __a) +{ + return (uint8x16_t) {__a, __a, __a, __a, __a, __a, __a, __a, + __a, __a, __a, __a, __a, __a, __a, __a}; +} + +__extension__ static 
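+/* Editor's note -- usage sketch, not part of the imported header.  The
+   vcvt, vcvta, vcvtm, vcvtn and vcvtp conversions defined earlier in
+   this section differ only in rounding: vcvt truncates toward zero,
+   vcvta rounds to nearest with ties away from zero, vcvtm rounds
+   toward minus infinity (floor), vcvtn rounds to nearest even, and
+   vcvtp rounds toward plus infinity (ceil).  For -1.5f:
+
+     int32_t t = vcvts_s32_f32 (-1.5f);    // -1  (truncate)
+     int32_t a = vcvtas_s32_f32 (-1.5f);   // -2  (ties away)
+     int32_t m = vcvtms_s32_f32 (-1.5f);   // -2  (floor)
+     int32_t n = vcvtns_s32_f32 (-1.5f);   // -2  (nearest even)
+     int32_t p = vcvtps_s32_f32 (-1.5f);   // -1  (ceil)
+
+   Likewise for the bit-counting family above:
+
+     int32x2_t lz = vclz_s32 (vdup_n_s32 (1));   // each lane: 31
+*/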
__inline uint16x8_t __attribute__ ((__always_inline__)) +vdupq_n_u16 (uint32_t __a) +{ + return (uint16x8_t) {__a, __a, __a, __a, __a, __a, __a, __a}; +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vdupq_n_u32 (uint32_t __a) +{ + return (uint32x4_t) {__a, __a, __a, __a}; +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vdupq_n_u64 (uint64_t __a) +{ + return (uint64x2_t) {__a, __a}; +} + +/* vdup_lane */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vdup_lane_f32 (float32x2_t __a, const int __b) +{ + return __aarch64_vdup_lane_f32 (__a, __b); +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vdup_lane_f64 (float64x1_t __a, const int __b) +{ + return __aarch64_vdup_lane_f64 (__a, __b); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vdup_lane_p8 (poly8x8_t __a, const int __b) +{ + return __aarch64_vdup_lane_p8 (__a, __b); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vdup_lane_p16 (poly16x4_t __a, const int __b) +{ + return __aarch64_vdup_lane_p16 (__a, __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vdup_lane_s8 (int8x8_t __a, const int __b) +{ + return __aarch64_vdup_lane_s8 (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vdup_lane_s16 (int16x4_t __a, const int __b) +{ + return __aarch64_vdup_lane_s16 (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vdup_lane_s32 (int32x2_t __a, const int __b) +{ + return __aarch64_vdup_lane_s32 (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vdup_lane_s64 (int64x1_t __a, const int __b) +{ + return __aarch64_vdup_lane_s64 (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vdup_lane_u8 (uint8x8_t __a, const int __b) +{ + return __aarch64_vdup_lane_u8 (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vdup_lane_u16 (uint16x4_t __a, const int __b) +{ + return __aarch64_vdup_lane_u16 (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vdup_lane_u32 (uint32x2_t __a, const int __b) +{ + return __aarch64_vdup_lane_u32 (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vdup_lane_u64 (uint64x1_t __a, const int __b) +{ + return __aarch64_vdup_lane_u64 (__a, __b); +} + +/* vdup_laneq */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vdup_laneq_f32 (float32x4_t __a, const int __b) +{ + return __aarch64_vdup_laneq_f32 (__a, __b); +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vdup_laneq_f64 (float64x2_t __a, const int __b) +{ + return __aarch64_vdup_laneq_f64 (__a, __b); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vdup_laneq_p8 (poly8x16_t __a, const int __b) +{ + return __aarch64_vdup_laneq_p8 (__a, __b); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vdup_laneq_p16 (poly16x8_t __a, const int __b) +{ + return __aarch64_vdup_laneq_p16 (__a, __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vdup_laneq_s8 (int8x16_t __a, const int __b) +{ + return __aarch64_vdup_laneq_s8 (__a, __b); +} + +__extension__ static __inline 
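+/* Editor's note -- sketch only, not in the upstream header.  vdup_n
+   broadcasts a scalar into every lane, while vdup_lane broadcasts one
+   existing lane; the lane index must be a compile-time constant:
+
+     int16x4_t v = vdup_n_s16 (7);          // {7, 7, 7, 7}
+     int16x4_t w = vdup_lane_s16 (v, 2);    // lane 2 to all lanes
+*/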
int16x4_t __attribute__ ((__always_inline__)) +vdup_laneq_s16 (int16x8_t __a, const int __b) +{ + return __aarch64_vdup_laneq_s16 (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vdup_laneq_s32 (int32x4_t __a, const int __b) +{ + return __aarch64_vdup_laneq_s32 (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vdup_laneq_s64 (int64x2_t __a, const int __b) +{ + return __aarch64_vdup_laneq_s64 (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vdup_laneq_u8 (uint8x16_t __a, const int __b) +{ + return __aarch64_vdup_laneq_u8 (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vdup_laneq_u16 (uint16x8_t __a, const int __b) +{ + return __aarch64_vdup_laneq_u16 (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vdup_laneq_u32 (uint32x4_t __a, const int __b) +{ + return __aarch64_vdup_laneq_u32 (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vdup_laneq_u64 (uint64x2_t __a, const int __b) +{ + return __aarch64_vdup_laneq_u64 (__a, __b); +} + +/* vdupq_lane */ +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vdupq_lane_f32 (float32x2_t __a, const int __b) +{ + return __aarch64_vdupq_lane_f32 (__a, __b); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vdupq_lane_f64 (float64x1_t __a, const int __b) +{ + return __aarch64_vdupq_lane_f64 (__a, __b); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vdupq_lane_p8 (poly8x8_t __a, const int __b) +{ + return __aarch64_vdupq_lane_p8 (__a, __b); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vdupq_lane_p16 (poly16x4_t __a, const int __b) +{ + return __aarch64_vdupq_lane_p16 (__a, __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vdupq_lane_s8 (int8x8_t __a, const int __b) +{ + return __aarch64_vdupq_lane_s8 (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vdupq_lane_s16 (int16x4_t __a, const int __b) +{ + return __aarch64_vdupq_lane_s16 (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vdupq_lane_s32 (int32x2_t __a, const int __b) +{ + return __aarch64_vdupq_lane_s32 (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vdupq_lane_s64 (int64x1_t __a, const int __b) +{ + return __aarch64_vdupq_lane_s64 (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vdupq_lane_u8 (uint8x8_t __a, const int __b) +{ + return __aarch64_vdupq_lane_u8 (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vdupq_lane_u16 (uint16x4_t __a, const int __b) +{ + return __aarch64_vdupq_lane_u16 (__a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vdupq_lane_u32 (uint32x2_t __a, const int __b) +{ + return __aarch64_vdupq_lane_u32 (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vdupq_lane_u64 (uint64x1_t __a, const int __b) +{ + return __aarch64_vdupq_lane_u64 (__a, __b); +} + +/* vdupq_laneq */ +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vdupq_laneq_f32 (float32x4_t __a, const int __b) +{ + return 
__aarch64_vdupq_laneq_f32 (__a, __b); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vdupq_laneq_f64 (float64x2_t __a, const int __b) +{ + return __aarch64_vdupq_laneq_f64 (__a, __b); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vdupq_laneq_p8 (poly8x16_t __a, const int __b) +{ + return __aarch64_vdupq_laneq_p8 (__a, __b); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vdupq_laneq_p16 (poly16x8_t __a, const int __b) +{ + return __aarch64_vdupq_laneq_p16 (__a, __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vdupq_laneq_s8 (int8x16_t __a, const int __b) +{ + return __aarch64_vdupq_laneq_s8 (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vdupq_laneq_s16 (int16x8_t __a, const int __b) +{ + return __aarch64_vdupq_laneq_s16 (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vdupq_laneq_s32 (int32x4_t __a, const int __b) +{ + return __aarch64_vdupq_laneq_s32 (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vdupq_laneq_s64 (int64x2_t __a, const int __b) +{ + return __aarch64_vdupq_laneq_s64 (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vdupq_laneq_u8 (uint8x16_t __a, const int __b) +{ + return __aarch64_vdupq_laneq_u8 (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vdupq_laneq_u16 (uint16x8_t __a, const int __b) +{ + return __aarch64_vdupq_laneq_u16 (__a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vdupq_laneq_u32 (uint32x4_t __a, const int __b) +{ + return __aarch64_vdupq_laneq_u32 (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vdupq_laneq_u64 (uint64x2_t __a, const int __b) +{ + return __aarch64_vdupq_laneq_u64 (__a, __b); +} + +/* vdupb_lane */ +__extension__ static __inline poly8_t __attribute__ ((__always_inline__)) +vdupb_lane_p8 (poly8x8_t __a, const int __b) +{ + return __aarch64_vget_lane_p8 (__a, __b); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vdupb_lane_s8 (int8x8_t __a, const int __b) +{ + return __aarch64_vget_lane_s8 (__a, __b); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vdupb_lane_u8 (uint8x8_t __a, const int __b) +{ + return __aarch64_vget_lane_u8 (__a, __b); +} + +/* vduph_lane */ +__extension__ static __inline poly16_t __attribute__ ((__always_inline__)) +vduph_lane_p16 (poly16x4_t __a, const int __b) +{ + return __aarch64_vget_lane_p16 (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vduph_lane_s16 (int16x4_t __a, const int __b) +{ + return __aarch64_vget_lane_s16 (__a, __b); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vduph_lane_u16 (uint16x4_t __a, const int __b) +{ + return __aarch64_vget_lane_u16 (__a, __b); +} + +/* vdups_lane */ +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vdups_lane_f32 (float32x2_t __a, const int __b) +{ + return __aarch64_vget_lane_f32 (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vdups_lane_s32 (int32x2_t __a, const int __b) +{ + return __aarch64_vget_lane_s32 (__a, __b); +} + +__extension__ static __inline uint32_t __attribute__ 
((__always_inline__)) +vdups_lane_u32 (uint32x2_t __a, const int __b) +{ + return __aarch64_vget_lane_u32 (__a, __b); +} + +/* vdupd_lane */ +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vdupd_lane_f64 (float64x1_t __a, const int __attribute__ ((unused)) __b) +{ + return __a; +} + +__extension__ static __inline int64_t __attribute__ ((__always_inline__)) +vdupd_lane_s64 (int64x1_t __a, const int __attribute__ ((unused)) __b) +{ + return __a; +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vdupd_lane_u64 (uint64x1_t __a, const int __attribute__ ((unused)) __b) +{ + return __a; +} + +/* vdupb_laneq */ +__extension__ static __inline poly8_t __attribute__ ((__always_inline__)) +vdupb_laneq_p8 (poly8x16_t __a, const int __b) +{ + return __aarch64_vgetq_lane_p8 (__a, __b); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vdupb_laneq_s8 (int8x16_t __a, const int __b) +{ + return __aarch64_vgetq_lane_s8 (__a, __b); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vdupb_laneq_u8 (uint8x16_t __a, const int __b) +{ + return __aarch64_vgetq_lane_u8 (__a, __b); +} + +/* vduph_laneq */ +__extension__ static __inline poly16_t __attribute__ ((__always_inline__)) +vduph_laneq_p16 (poly16x8_t __a, const int __b) +{ + return __aarch64_vgetq_lane_p16 (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vduph_laneq_s16 (int16x8_t __a, const int __b) +{ + return __aarch64_vgetq_lane_s16 (__a, __b); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vduph_laneq_u16 (uint16x8_t __a, const int __b) +{ + return __aarch64_vgetq_lane_u16 (__a, __b); +} + +/* vdups_laneq */ +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vdups_laneq_f32 (float32x4_t __a, const int __b) +{ + return __aarch64_vgetq_lane_f32 (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vdups_laneq_s32 (int32x4_t __a, const int __b) +{ + return __aarch64_vgetq_lane_s32 (__a, __b); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vdups_laneq_u32 (uint32x4_t __a, const int __b) +{ + return __aarch64_vgetq_lane_u32 (__a, __b); +} + +/* vdupd_laneq */ +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vdupd_laneq_f64 (float64x2_t __a, const int __b) +{ + return __aarch64_vgetq_lane_f64 (__a, __b); +} + +__extension__ static __inline int64_t __attribute__ ((__always_inline__)) +vdupd_laneq_s64 (int64x2_t __a, const int __b) +{ + return __aarch64_vgetq_lane_s64 (__a, __b); +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vdupd_laneq_u64 (uint64x2_t __a, const int __b) +{ + return __aarch64_vgetq_lane_u64 (__a, __b); +} + +/* vfma_lane */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vfma_lane_f32 (float32x2_t __a, float32x2_t __b, + float32x2_t __c, const int __lane) +{ + return __builtin_aarch64_fmav2sf (__b, + __aarch64_vdup_lane_f32 (__c, __lane), + __a); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vfma_lane_f64 (float64_t __a, float64_t __b, + float64_t __c, const int __lane) +{ + return __builtin_fma (__b, __c, __a); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vfmad_lane_f64 (float64_t __a, float64_t __b, + float64_t __c, const int __lane) +{ + return 
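+/* Editor's note -- sketch, not from the upstream header.  The vdupb_,
+   vduph_, vdups_ and vdupd_ forms above extract a single lane to a
+   scalar (equivalent to vget_lane / vgetq_lane); the one-lane
+   d-register forms simply return their argument:
+
+     float32x2_t v = {1.0f, 2.0f};
+     float32_t s = vdups_lane_f32 (v, 1);   // 2.0f
+*/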
__builtin_fma (__b, __c, __a); +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vfmas_lane_f32 (float32_t __a, float32_t __b, + float32x2_t __c, const int __lane) +{ + return __builtin_fmaf (__b, __aarch64_vget_lane_f32 (__c, __lane), __a); +} + +/* vfma_laneq */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vfma_laneq_f32 (float32x2_t __a, float32x2_t __b, + float32x4_t __c, const int __lane) +{ + return __builtin_aarch64_fmav2sf (__b, + __aarch64_vdup_laneq_f32 (__c, __lane), + __a); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vfma_laneq_f64 (float64_t __a, float64_t __b, + float64x2_t __c, const int __lane) +{ + return __builtin_fma (__b, __aarch64_vgetq_lane_f64 (__c, __lane), __a); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vfmad_laneq_f64 (float64_t __a, float64_t __b, + float64x2_t __c, const int __lane) +{ + return __builtin_fma (__b, __aarch64_vgetq_lane_f64 (__c, __lane), __a); +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vfmas_laneq_f32 (float32_t __a, float32_t __b, + float32x4_t __c, const int __lane) +{ + return __builtin_fmaf (__b, __aarch64_vgetq_lane_f32 (__c, __lane), __a); +} + +/* vfmaq_lane */ + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vfmaq_lane_f32 (float32x4_t __a, float32x4_t __b, + float32x2_t __c, const int __lane) +{ + return __builtin_aarch64_fmav4sf (__b, + __aarch64_vdupq_lane_f32 (__c, __lane), + __a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vfmaq_lane_f64 (float64x2_t __a, float64x2_t __b, + float64_t __c, const int __lane) +{ + return __builtin_aarch64_fmav2df (__b, vdupq_n_f64 (__c), __a); +} + +/* vfmaq_laneq */ + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vfmaq_laneq_f32 (float32x4_t __a, float32x4_t __b, + float32x4_t __c, const int __lane) +{ + return __builtin_aarch64_fmav4sf (__b, + __aarch64_vdupq_laneq_f32 (__c, __lane), + __a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vfmaq_laneq_f64 (float64x2_t __a, float64x2_t __b, + float64x2_t __c, const int __lane) +{ + return __builtin_aarch64_fmav2df (__b, + __aarch64_vdupq_laneq_f64 (__c, __lane), + __a); +} + +/* vfms_lane */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vfms_lane_f32 (float32x2_t __a, float32x2_t __b, + float32x2_t __c, const int __lane) +{ + return __builtin_aarch64_fmav2sf (-__b, + __aarch64_vdup_lane_f32 (__c, __lane), + __a); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vfms_lane_f64 (float64_t __a, float64_t __b, + float64_t __c, const int __lane) +{ + return __builtin_fma (-__b, __c, __a); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vfmsd_lane_f64 (float64_t __a, float64_t __b, + float64_t __c, const int __lane) +{ + return __builtin_fma (-__b, __c, __a); +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vfmss_lane_f32 (float32_t __a, float32_t __b, + float32x2_t __c, const int __lane) +{ + return __builtin_fmaf (-__b, __aarch64_vget_lane_f32 (__c, __lane), __a); +} + +/* vfms_laneq */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vfms_laneq_f32 (float32x2_t __a, float32x2_t __b, + float32x4_t __c, const int __lane) +{ + return __builtin_aarch64_fmav2sf 
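+/* Editor's note -- illustrative sketch (editor's addition).  The
+   by-lane vfma forms compute a fused multiply-add, a + b * c[lane],
+   with a single rounding; the vfms variants negate b first, giving
+   a - b * c[lane].  For example:
+
+     float32x2_t acc = vdup_n_f32 (0.0f);
+     float32x2_t x   = {1.0f, 2.0f};
+     float32x2_t c   = {3.0f, 4.0f};
+     float32x2_t r   = vfma_lane_f32 (acc, x, c, 1);  // {4.0f, 8.0f}
+*/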
(-__b, + __aarch64_vdup_laneq_f32 (__c, __lane), + __a); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vfms_laneq_f64 (float64_t __a, float64_t __b, + float64x2_t __c, const int __lane) +{ + return __builtin_fma (-__b, __aarch64_vgetq_lane_f64 (__c, __lane), __a); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vfmsd_laneq_f64 (float64_t __a, float64_t __b, + float64x2_t __c, const int __lane) +{ + return __builtin_fma (-__b, __aarch64_vgetq_lane_f64 (__c, __lane), __a); +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vfmss_laneq_f32 (float32_t __a, float32_t __b, + float32x4_t __c, const int __lane) +{ + return __builtin_fmaf (-__b, __aarch64_vgetq_lane_f32 (__c, __lane), __a); +} + +/* vfmsq_lane */ + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vfmsq_lane_f32 (float32x4_t __a, float32x4_t __b, + float32x2_t __c, const int __lane) +{ + return __builtin_aarch64_fmav4sf (-__b, + __aarch64_vdupq_lane_f32 (__c, __lane), + __a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vfmsq_lane_f64 (float64x2_t __a, float64x2_t __b, + float64_t __c, const int __lane) +{ + return __builtin_aarch64_fmav2df (-__b, vdupq_n_f64 (__c), __a); +} + +/* vfmsq_laneq */ + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vfmsq_laneq_f32 (float32x4_t __a, float32x4_t __b, + float32x4_t __c, const int __lane) +{ + return __builtin_aarch64_fmav4sf (-__b, + __aarch64_vdupq_laneq_f32 (__c, __lane), + __a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vfmsq_laneq_f64 (float64x2_t __a, float64x2_t __b, + float64x2_t __c, const int __lane) +{ + return __builtin_aarch64_fmav2df (-__b, + __aarch64_vdupq_laneq_f64 (__c, __lane), + __a); +} + +/* vld1 */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vld1_f32 (const float32_t *a) +{ + return __builtin_aarch64_ld1v2sf ((const __builtin_aarch64_simd_sf *) a); +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vld1_f64 (const float64_t *a) +{ + return *a; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vld1_p8 (const poly8_t *a) +{ + return (poly8x8_t) + __builtin_aarch64_ld1v8qi ((const __builtin_aarch64_simd_qi *) a); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vld1_p16 (const poly16_t *a) +{ + return (poly16x4_t) + __builtin_aarch64_ld1v4hi ((const __builtin_aarch64_simd_hi *) a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vld1_s8 (const int8_t *a) +{ + return __builtin_aarch64_ld1v8qi ((const __builtin_aarch64_simd_qi *) a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vld1_s16 (const int16_t *a) +{ + return __builtin_aarch64_ld1v4hi ((const __builtin_aarch64_simd_hi *) a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vld1_s32 (const int32_t *a) +{ + return __builtin_aarch64_ld1v2si ((const __builtin_aarch64_simd_si *) a); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vld1_s64 (const int64_t *a) +{ + return *a; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vld1_u8 (const uint8_t *a) +{ + return (uint8x8_t) + __builtin_aarch64_ld1v8qi ((const __builtin_aarch64_simd_qi *) a); +} + +__extension__ static 
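+/* Editor's note -- usage sketch, not part of the imported header.
+   vld1 loads one 64-bit D register and vld1q (defined just below) one
+   128-bit Q register from a pointer with no alignment requirement
+   beyond the element type:
+
+     float buf[4] = {1.0f, 2.0f, 3.0f, 4.0f};
+     float32x2_t lo = vld1_f32 (buf);       // {1.0f, 2.0f}
+     float32x4_t q  = vld1q_f32 (buf);      // all four elements
+*/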
__inline uint16x4_t __attribute__ ((__always_inline__)) +vld1_u16 (const uint16_t *a) +{ + return (uint16x4_t) + __builtin_aarch64_ld1v4hi ((const __builtin_aarch64_simd_hi *) a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vld1_u32 (const uint32_t *a) +{ + return (uint32x2_t) + __builtin_aarch64_ld1v2si ((const __builtin_aarch64_simd_si *) a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vld1_u64 (const uint64_t *a) +{ + return *a; +} + +/* vld1q */ + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vld1q_f32 (const float32_t *a) +{ + return __builtin_aarch64_ld1v4sf ((const __builtin_aarch64_simd_sf *) a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vld1q_f64 (const float64_t *a) +{ + return __builtin_aarch64_ld1v2df ((const __builtin_aarch64_simd_df *) a); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vld1q_p8 (const poly8_t *a) +{ + return (poly8x16_t) + __builtin_aarch64_ld1v16qi ((const __builtin_aarch64_simd_qi *) a); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vld1q_p16 (const poly16_t *a) +{ + return (poly16x8_t) + __builtin_aarch64_ld1v8hi ((const __builtin_aarch64_simd_hi *) a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vld1q_s8 (const int8_t *a) +{ + return __builtin_aarch64_ld1v16qi ((const __builtin_aarch64_simd_qi *) a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vld1q_s16 (const int16_t *a) +{ + return __builtin_aarch64_ld1v8hi ((const __builtin_aarch64_simd_hi *) a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vld1q_s32 (const int32_t *a) +{ + return __builtin_aarch64_ld1v4si ((const __builtin_aarch64_simd_si *) a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vld1q_s64 (const int64_t *a) +{ + return __builtin_aarch64_ld1v2di ((const __builtin_aarch64_simd_di *) a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vld1q_u8 (const uint8_t *a) +{ + return (uint8x16_t) + __builtin_aarch64_ld1v16qi ((const __builtin_aarch64_simd_qi *) a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vld1q_u16 (const uint16_t *a) +{ + return (uint16x8_t) + __builtin_aarch64_ld1v8hi ((const __builtin_aarch64_simd_hi *) a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vld1q_u32 (const uint32_t *a) +{ + return (uint32x4_t) + __builtin_aarch64_ld1v4si ((const __builtin_aarch64_simd_si *) a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vld1q_u64 (const uint64_t *a) +{ + return (uint64x2_t) + __builtin_aarch64_ld1v2di ((const __builtin_aarch64_simd_di *) a); +} + +/* vldn */ + +__extension__ static __inline int64x1x2_t __attribute__ ((__always_inline__)) +vld2_s64 (const int64_t * __a) +{ + int64x1x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2di ((const __builtin_aarch64_simd_di *) __a); + ret.val[0] = (int64x1_t) __builtin_aarch64_get_dregoidi (__o, 0); + ret.val[1] = (int64x1_t) __builtin_aarch64_get_dregoidi (__o, 1); + return ret; +} + +__extension__ static __inline uint64x1x2_t __attribute__ ((__always_inline__)) +vld2_u64 (const uint64_t * __a) +{ + uint64x1x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2di ((const 
__builtin_aarch64_simd_di *) __a); + ret.val[0] = (uint64x1_t) __builtin_aarch64_get_dregoidi (__o, 0); + ret.val[1] = (uint64x1_t) __builtin_aarch64_get_dregoidi (__o, 1); + return ret; +} + +__extension__ static __inline float64x1x2_t __attribute__ ((__always_inline__)) +vld2_f64 (const float64_t * __a) +{ + float64x1x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2df ((const __builtin_aarch64_simd_df *) __a); + ret.val[0] = (float64x1_t) __builtin_aarch64_get_dregoidf (__o, 0); + ret.val[1] = (float64x1_t) __builtin_aarch64_get_dregoidf (__o, 1); + return ret; +} + +__extension__ static __inline int8x8x2_t __attribute__ ((__always_inline__)) +vld2_s8 (const int8_t * __a) +{ + int8x8x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v8qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (int8x8_t) __builtin_aarch64_get_dregoiv8qi (__o, 0); + ret.val[1] = (int8x8_t) __builtin_aarch64_get_dregoiv8qi (__o, 1); + return ret; +} + +__extension__ static __inline poly8x8x2_t __attribute__ ((__always_inline__)) +vld2_p8 (const poly8_t * __a) +{ + poly8x8x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v8qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (poly8x8_t) __builtin_aarch64_get_dregoiv8qi (__o, 0); + ret.val[1] = (poly8x8_t) __builtin_aarch64_get_dregoiv8qi (__o, 1); + return ret; +} + +__extension__ static __inline int16x4x2_t __attribute__ ((__always_inline__)) +vld2_s16 (const int16_t * __a) +{ + int16x4x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v4hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (int16x4_t) __builtin_aarch64_get_dregoiv4hi (__o, 0); + ret.val[1] = (int16x4_t) __builtin_aarch64_get_dregoiv4hi (__o, 1); + return ret; +} + +__extension__ static __inline poly16x4x2_t __attribute__ ((__always_inline__)) +vld2_p16 (const poly16_t * __a) +{ + poly16x4x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v4hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (poly16x4_t) __builtin_aarch64_get_dregoiv4hi (__o, 0); + ret.val[1] = (poly16x4_t) __builtin_aarch64_get_dregoiv4hi (__o, 1); + return ret; +} + +__extension__ static __inline int32x2x2_t __attribute__ ((__always_inline__)) +vld2_s32 (const int32_t * __a) +{ + int32x2x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v2si ((const __builtin_aarch64_simd_si *) __a); + ret.val[0] = (int32x2_t) __builtin_aarch64_get_dregoiv2si (__o, 0); + ret.val[1] = (int32x2_t) __builtin_aarch64_get_dregoiv2si (__o, 1); + return ret; +} + +__extension__ static __inline uint8x8x2_t __attribute__ ((__always_inline__)) +vld2_u8 (const uint8_t * __a) +{ + uint8x8x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v8qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (uint8x8_t) __builtin_aarch64_get_dregoiv8qi (__o, 0); + ret.val[1] = (uint8x8_t) __builtin_aarch64_get_dregoiv8qi (__o, 1); + return ret; +} + +__extension__ static __inline uint16x4x2_t __attribute__ ((__always_inline__)) +vld2_u16 (const uint16_t * __a) +{ + uint16x4x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v4hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (uint16x4_t) __builtin_aarch64_get_dregoiv4hi (__o, 0); + ret.val[1] = (uint16x4_t) __builtin_aarch64_get_dregoiv4hi (__o, 1); + return ret; +} + +__extension__ static __inline uint32x2x2_t __attribute__ ((__always_inline__)) +vld2_u32 (const uint32_t * __a) +{ + uint32x2x2_t ret; + 
__builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v2si ((const __builtin_aarch64_simd_si *) __a); + ret.val[0] = (uint32x2_t) __builtin_aarch64_get_dregoiv2si (__o, 0); + ret.val[1] = (uint32x2_t) __builtin_aarch64_get_dregoiv2si (__o, 1); + return ret; +} + +__extension__ static __inline float32x2x2_t __attribute__ ((__always_inline__)) +vld2_f32 (const float32_t * __a) +{ + float32x2x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v2sf ((const __builtin_aarch64_simd_sf *) __a); + ret.val[0] = (float32x2_t) __builtin_aarch64_get_dregoiv2sf (__o, 0); + ret.val[1] = (float32x2_t) __builtin_aarch64_get_dregoiv2sf (__o, 1); + return ret; +} + +__extension__ static __inline int8x16x2_t __attribute__ ((__always_inline__)) +vld2q_s8 (const int8_t * __a) +{ + int8x16x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v16qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (int8x16_t) __builtin_aarch64_get_qregoiv16qi (__o, 0); + ret.val[1] = (int8x16_t) __builtin_aarch64_get_qregoiv16qi (__o, 1); + return ret; +} + +__extension__ static __inline poly8x16x2_t __attribute__ ((__always_inline__)) +vld2q_p8 (const poly8_t * __a) +{ + poly8x16x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v16qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (poly8x16_t) __builtin_aarch64_get_qregoiv16qi (__o, 0); + ret.val[1] = (poly8x16_t) __builtin_aarch64_get_qregoiv16qi (__o, 1); + return ret; +} + +__extension__ static __inline int16x8x2_t __attribute__ ((__always_inline__)) +vld2q_s16 (const int16_t * __a) +{ + int16x8x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v8hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (int16x8_t) __builtin_aarch64_get_qregoiv8hi (__o, 0); + ret.val[1] = (int16x8_t) __builtin_aarch64_get_qregoiv8hi (__o, 1); + return ret; +} + +__extension__ static __inline poly16x8x2_t __attribute__ ((__always_inline__)) +vld2q_p16 (const poly16_t * __a) +{ + poly16x8x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v8hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (poly16x8_t) __builtin_aarch64_get_qregoiv8hi (__o, 0); + ret.val[1] = (poly16x8_t) __builtin_aarch64_get_qregoiv8hi (__o, 1); + return ret; +} + +__extension__ static __inline int32x4x2_t __attribute__ ((__always_inline__)) +vld2q_s32 (const int32_t * __a) +{ + int32x4x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v4si ((const __builtin_aarch64_simd_si *) __a); + ret.val[0] = (int32x4_t) __builtin_aarch64_get_qregoiv4si (__o, 0); + ret.val[1] = (int32x4_t) __builtin_aarch64_get_qregoiv4si (__o, 1); + return ret; +} + +__extension__ static __inline int64x2x2_t __attribute__ ((__always_inline__)) +vld2q_s64 (const int64_t * __a) +{ + int64x2x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v2di ((const __builtin_aarch64_simd_di *) __a); + ret.val[0] = (int64x2_t) __builtin_aarch64_get_qregoiv2di (__o, 0); + ret.val[1] = (int64x2_t) __builtin_aarch64_get_qregoiv2di (__o, 1); + return ret; +} + +__extension__ static __inline uint8x16x2_t __attribute__ ((__always_inline__)) +vld2q_u8 (const uint8_t * __a) +{ + uint8x16x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v16qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (uint8x16_t) __builtin_aarch64_get_qregoiv16qi (__o, 0); + ret.val[1] = (uint8x16_t) __builtin_aarch64_get_qregoiv16qi (__o, 1); + return ret; +} + +__extension__ static __inline 
uint16x8x2_t __attribute__ ((__always_inline__)) +vld2q_u16 (const uint16_t * __a) +{ + uint16x8x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v8hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (uint16x8_t) __builtin_aarch64_get_qregoiv8hi (__o, 0); + ret.val[1] = (uint16x8_t) __builtin_aarch64_get_qregoiv8hi (__o, 1); + return ret; +} + +__extension__ static __inline uint32x4x2_t __attribute__ ((__always_inline__)) +vld2q_u32 (const uint32_t * __a) +{ + uint32x4x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v4si ((const __builtin_aarch64_simd_si *) __a); + ret.val[0] = (uint32x4_t) __builtin_aarch64_get_qregoiv4si (__o, 0); + ret.val[1] = (uint32x4_t) __builtin_aarch64_get_qregoiv4si (__o, 1); + return ret; +} + +__extension__ static __inline uint64x2x2_t __attribute__ ((__always_inline__)) +vld2q_u64 (const uint64_t * __a) +{ + uint64x2x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v2di ((const __builtin_aarch64_simd_di *) __a); + ret.val[0] = (uint64x2_t) __builtin_aarch64_get_qregoiv2di (__o, 0); + ret.val[1] = (uint64x2_t) __builtin_aarch64_get_qregoiv2di (__o, 1); + return ret; +} + +__extension__ static __inline float32x4x2_t __attribute__ ((__always_inline__)) +vld2q_f32 (const float32_t * __a) +{ + float32x4x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v4sf ((const __builtin_aarch64_simd_sf *) __a); + ret.val[0] = (float32x4_t) __builtin_aarch64_get_qregoiv4sf (__o, 0); + ret.val[1] = (float32x4_t) __builtin_aarch64_get_qregoiv4sf (__o, 1); + return ret; +} + +__extension__ static __inline float64x2x2_t __attribute__ ((__always_inline__)) +vld2q_f64 (const float64_t * __a) +{ + float64x2x2_t ret; + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_ld2v2df ((const __builtin_aarch64_simd_df *) __a); + ret.val[0] = (float64x2_t) __builtin_aarch64_get_qregoiv2df (__o, 0); + ret.val[1] = (float64x2_t) __builtin_aarch64_get_qregoiv2df (__o, 1); + return ret; +} + +__extension__ static __inline int64x1x3_t __attribute__ ((__always_inline__)) +vld3_s64 (const int64_t * __a) +{ + int64x1x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3di ((const __builtin_aarch64_simd_di *) __a); + ret.val[0] = (int64x1_t) __builtin_aarch64_get_dregcidi (__o, 0); + ret.val[1] = (int64x1_t) __builtin_aarch64_get_dregcidi (__o, 1); + ret.val[2] = (int64x1_t) __builtin_aarch64_get_dregcidi (__o, 2); + return ret; +} + +__extension__ static __inline uint64x1x3_t __attribute__ ((__always_inline__)) +vld3_u64 (const uint64_t * __a) +{ + uint64x1x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3di ((const __builtin_aarch64_simd_di *) __a); + ret.val[0] = (uint64x1_t) __builtin_aarch64_get_dregcidi (__o, 0); + ret.val[1] = (uint64x1_t) __builtin_aarch64_get_dregcidi (__o, 1); + ret.val[2] = (uint64x1_t) __builtin_aarch64_get_dregcidi (__o, 2); + return ret; +} + +__extension__ static __inline float64x1x3_t __attribute__ ((__always_inline__)) +vld3_f64 (const float64_t * __a) +{ + float64x1x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3df ((const __builtin_aarch64_simd_df *) __a); + ret.val[0] = (float64x1_t) __builtin_aarch64_get_dregcidf (__o, 0); + ret.val[1] = (float64x1_t) __builtin_aarch64_get_dregcidf (__o, 1); + ret.val[2] = (float64x1_t) __builtin_aarch64_get_dregcidf (__o, 2); + return ret; +} + +__extension__ static __inline int8x8x3_t __attribute__ ((__always_inline__)) +vld3_s8 (const int8_t * __a) +{ + int8x8x3_t 
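+/* Editor's note -- sketch only (editor's addition).  vld2 performs a
+   de-interleaving structure load: consecutive element pairs are split
+   across the two vectors of the returned x2_t aggregate:
+
+     int32_t buf[4] = {0, 1, 2, 3};
+     int32x2x2_t p = vld2_s32 (buf);
+     // p.val[0] = {0, 2},  p.val[1] = {1, 3}
+*/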
ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v8qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (int8x8_t) __builtin_aarch64_get_dregciv8qi (__o, 0); + ret.val[1] = (int8x8_t) __builtin_aarch64_get_dregciv8qi (__o, 1); + ret.val[2] = (int8x8_t) __builtin_aarch64_get_dregciv8qi (__o, 2); + return ret; +} + +__extension__ static __inline poly8x8x3_t __attribute__ ((__always_inline__)) +vld3_p8 (const poly8_t * __a) +{ + poly8x8x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v8qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (poly8x8_t) __builtin_aarch64_get_dregciv8qi (__o, 0); + ret.val[1] = (poly8x8_t) __builtin_aarch64_get_dregciv8qi (__o, 1); + ret.val[2] = (poly8x8_t) __builtin_aarch64_get_dregciv8qi (__o, 2); + return ret; +} + +__extension__ static __inline int16x4x3_t __attribute__ ((__always_inline__)) +vld3_s16 (const int16_t * __a) +{ + int16x4x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v4hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (int16x4_t) __builtin_aarch64_get_dregciv4hi (__o, 0); + ret.val[1] = (int16x4_t) __builtin_aarch64_get_dregciv4hi (__o, 1); + ret.val[2] = (int16x4_t) __builtin_aarch64_get_dregciv4hi (__o, 2); + return ret; +} + +__extension__ static __inline poly16x4x3_t __attribute__ ((__always_inline__)) +vld3_p16 (const poly16_t * __a) +{ + poly16x4x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v4hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (poly16x4_t) __builtin_aarch64_get_dregciv4hi (__o, 0); + ret.val[1] = (poly16x4_t) __builtin_aarch64_get_dregciv4hi (__o, 1); + ret.val[2] = (poly16x4_t) __builtin_aarch64_get_dregciv4hi (__o, 2); + return ret; +} + +__extension__ static __inline int32x2x3_t __attribute__ ((__always_inline__)) +vld3_s32 (const int32_t * __a) +{ + int32x2x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v2si ((const __builtin_aarch64_simd_si *) __a); + ret.val[0] = (int32x2_t) __builtin_aarch64_get_dregciv2si (__o, 0); + ret.val[1] = (int32x2_t) __builtin_aarch64_get_dregciv2si (__o, 1); + ret.val[2] = (int32x2_t) __builtin_aarch64_get_dregciv2si (__o, 2); + return ret; +} + +__extension__ static __inline uint8x8x3_t __attribute__ ((__always_inline__)) +vld3_u8 (const uint8_t * __a) +{ + uint8x8x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v8qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (uint8x8_t) __builtin_aarch64_get_dregciv8qi (__o, 0); + ret.val[1] = (uint8x8_t) __builtin_aarch64_get_dregciv8qi (__o, 1); + ret.val[2] = (uint8x8_t) __builtin_aarch64_get_dregciv8qi (__o, 2); + return ret; +} + +__extension__ static __inline uint16x4x3_t __attribute__ ((__always_inline__)) +vld3_u16 (const uint16_t * __a) +{ + uint16x4x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v4hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (uint16x4_t) __builtin_aarch64_get_dregciv4hi (__o, 0); + ret.val[1] = (uint16x4_t) __builtin_aarch64_get_dregciv4hi (__o, 1); + ret.val[2] = (uint16x4_t) __builtin_aarch64_get_dregciv4hi (__o, 2); + return ret; +} + +__extension__ static __inline uint32x2x3_t __attribute__ ((__always_inline__)) +vld3_u32 (const uint32_t * __a) +{ + uint32x2x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v2si ((const __builtin_aarch64_simd_si *) __a); + ret.val[0] = (uint32x2_t) __builtin_aarch64_get_dregciv2si (__o, 0); + ret.val[1] = (uint32x2_t) 
__builtin_aarch64_get_dregciv2si (__o, 1); + ret.val[2] = (uint32x2_t) __builtin_aarch64_get_dregciv2si (__o, 2); + return ret; +} + +__extension__ static __inline float32x2x3_t __attribute__ ((__always_inline__)) +vld3_f32 (const float32_t * __a) +{ + float32x2x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v2sf ((const __builtin_aarch64_simd_sf *) __a); + ret.val[0] = (float32x2_t) __builtin_aarch64_get_dregciv2sf (__o, 0); + ret.val[1] = (float32x2_t) __builtin_aarch64_get_dregciv2sf (__o, 1); + ret.val[2] = (float32x2_t) __builtin_aarch64_get_dregciv2sf (__o, 2); + return ret; +} + +__extension__ static __inline int8x16x3_t __attribute__ ((__always_inline__)) +vld3q_s8 (const int8_t * __a) +{ + int8x16x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v16qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (int8x16_t) __builtin_aarch64_get_qregciv16qi (__o, 0); + ret.val[1] = (int8x16_t) __builtin_aarch64_get_qregciv16qi (__o, 1); + ret.val[2] = (int8x16_t) __builtin_aarch64_get_qregciv16qi (__o, 2); + return ret; +} + +__extension__ static __inline poly8x16x3_t __attribute__ ((__always_inline__)) +vld3q_p8 (const poly8_t * __a) +{ + poly8x16x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v16qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (poly8x16_t) __builtin_aarch64_get_qregciv16qi (__o, 0); + ret.val[1] = (poly8x16_t) __builtin_aarch64_get_qregciv16qi (__o, 1); + ret.val[2] = (poly8x16_t) __builtin_aarch64_get_qregciv16qi (__o, 2); + return ret; +} + +__extension__ static __inline int16x8x3_t __attribute__ ((__always_inline__)) +vld3q_s16 (const int16_t * __a) +{ + int16x8x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v8hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (int16x8_t) __builtin_aarch64_get_qregciv8hi (__o, 0); + ret.val[1] = (int16x8_t) __builtin_aarch64_get_qregciv8hi (__o, 1); + ret.val[2] = (int16x8_t) __builtin_aarch64_get_qregciv8hi (__o, 2); + return ret; +} + +__extension__ static __inline poly16x8x3_t __attribute__ ((__always_inline__)) +vld3q_p16 (const poly16_t * __a) +{ + poly16x8x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v8hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (poly16x8_t) __builtin_aarch64_get_qregciv8hi (__o, 0); + ret.val[1] = (poly16x8_t) __builtin_aarch64_get_qregciv8hi (__o, 1); + ret.val[2] = (poly16x8_t) __builtin_aarch64_get_qregciv8hi (__o, 2); + return ret; +} + +__extension__ static __inline int32x4x3_t __attribute__ ((__always_inline__)) +vld3q_s32 (const int32_t * __a) +{ + int32x4x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v4si ((const __builtin_aarch64_simd_si *) __a); + ret.val[0] = (int32x4_t) __builtin_aarch64_get_qregciv4si (__o, 0); + ret.val[1] = (int32x4_t) __builtin_aarch64_get_qregciv4si (__o, 1); + ret.val[2] = (int32x4_t) __builtin_aarch64_get_qregciv4si (__o, 2); + return ret; +} + +__extension__ static __inline int64x2x3_t __attribute__ ((__always_inline__)) +vld3q_s64 (const int64_t * __a) +{ + int64x2x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v2di ((const __builtin_aarch64_simd_di *) __a); + ret.val[0] = (int64x2_t) __builtin_aarch64_get_qregciv2di (__o, 0); + ret.val[1] = (int64x2_t) __builtin_aarch64_get_qregciv2di (__o, 1); + ret.val[2] = (int64x2_t) __builtin_aarch64_get_qregciv2di (__o, 2); + return ret; +} + +__extension__ static __inline uint8x16x3_t __attribute__ ((__always_inline__)) 
+vld3q_u8 (const uint8_t * __a) +{ + uint8x16x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v16qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (uint8x16_t) __builtin_aarch64_get_qregciv16qi (__o, 0); + ret.val[1] = (uint8x16_t) __builtin_aarch64_get_qregciv16qi (__o, 1); + ret.val[2] = (uint8x16_t) __builtin_aarch64_get_qregciv16qi (__o, 2); + return ret; +} + +__extension__ static __inline uint16x8x3_t __attribute__ ((__always_inline__)) +vld3q_u16 (const uint16_t * __a) +{ + uint16x8x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v8hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (uint16x8_t) __builtin_aarch64_get_qregciv8hi (__o, 0); + ret.val[1] = (uint16x8_t) __builtin_aarch64_get_qregciv8hi (__o, 1); + ret.val[2] = (uint16x8_t) __builtin_aarch64_get_qregciv8hi (__o, 2); + return ret; +} + +__extension__ static __inline uint32x4x3_t __attribute__ ((__always_inline__)) +vld3q_u32 (const uint32_t * __a) +{ + uint32x4x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v4si ((const __builtin_aarch64_simd_si *) __a); + ret.val[0] = (uint32x4_t) __builtin_aarch64_get_qregciv4si (__o, 0); + ret.val[1] = (uint32x4_t) __builtin_aarch64_get_qregciv4si (__o, 1); + ret.val[2] = (uint32x4_t) __builtin_aarch64_get_qregciv4si (__o, 2); + return ret; +} + +__extension__ static __inline uint64x2x3_t __attribute__ ((__always_inline__)) +vld3q_u64 (const uint64_t * __a) +{ + uint64x2x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v2di ((const __builtin_aarch64_simd_di *) __a); + ret.val[0] = (uint64x2_t) __builtin_aarch64_get_qregciv2di (__o, 0); + ret.val[1] = (uint64x2_t) __builtin_aarch64_get_qregciv2di (__o, 1); + ret.val[2] = (uint64x2_t) __builtin_aarch64_get_qregciv2di (__o, 2); + return ret; +} + +__extension__ static __inline float32x4x3_t __attribute__ ((__always_inline__)) +vld3q_f32 (const float32_t * __a) +{ + float32x4x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v4sf ((const __builtin_aarch64_simd_sf *) __a); + ret.val[0] = (float32x4_t) __builtin_aarch64_get_qregciv4sf (__o, 0); + ret.val[1] = (float32x4_t) __builtin_aarch64_get_qregciv4sf (__o, 1); + ret.val[2] = (float32x4_t) __builtin_aarch64_get_qregciv4sf (__o, 2); + return ret; +} + +__extension__ static __inline float64x2x3_t __attribute__ ((__always_inline__)) +vld3q_f64 (const float64_t * __a) +{ + float64x2x3_t ret; + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_ld3v2df ((const __builtin_aarch64_simd_df *) __a); + ret.val[0] = (float64x2_t) __builtin_aarch64_get_qregciv2df (__o, 0); + ret.val[1] = (float64x2_t) __builtin_aarch64_get_qregciv2df (__o, 1); + ret.val[2] = (float64x2_t) __builtin_aarch64_get_qregciv2df (__o, 2); + return ret; +} + +__extension__ static __inline int64x1x4_t __attribute__ ((__always_inline__)) +vld4_s64 (const int64_t * __a) +{ + int64x1x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4di ((const __builtin_aarch64_simd_di *) __a); + ret.val[0] = (int64x1_t) __builtin_aarch64_get_dregxidi (__o, 0); + ret.val[1] = (int64x1_t) __builtin_aarch64_get_dregxidi (__o, 1); + ret.val[2] = (int64x1_t) __builtin_aarch64_get_dregxidi (__o, 2); + ret.val[3] = (int64x1_t) __builtin_aarch64_get_dregxidi (__o, 3); + return ret; +} + +__extension__ static __inline uint64x1x4_t __attribute__ ((__always_inline__)) +vld4_u64 (const uint64_t * __a) +{ + uint64x1x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4di ((const 
__builtin_aarch64_simd_di *) __a); + ret.val[0] = (uint64x1_t) __builtin_aarch64_get_dregxidi (__o, 0); + ret.val[1] = (uint64x1_t) __builtin_aarch64_get_dregxidi (__o, 1); + ret.val[2] = (uint64x1_t) __builtin_aarch64_get_dregxidi (__o, 2); + ret.val[3] = (uint64x1_t) __builtin_aarch64_get_dregxidi (__o, 3); + return ret; +} + +__extension__ static __inline float64x1x4_t __attribute__ ((__always_inline__)) +vld4_f64 (const float64_t * __a) +{ + float64x1x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4df ((const __builtin_aarch64_simd_df *) __a); + ret.val[0] = (float64x1_t) __builtin_aarch64_get_dregxidf (__o, 0); + ret.val[1] = (float64x1_t) __builtin_aarch64_get_dregxidf (__o, 1); + ret.val[2] = (float64x1_t) __builtin_aarch64_get_dregxidf (__o, 2); + ret.val[3] = (float64x1_t) __builtin_aarch64_get_dregxidf (__o, 3); + return ret; +} + +__extension__ static __inline int8x8x4_t __attribute__ ((__always_inline__)) +vld4_s8 (const int8_t * __a) +{ + int8x8x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v8qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (int8x8_t) __builtin_aarch64_get_dregxiv8qi (__o, 0); + ret.val[1] = (int8x8_t) __builtin_aarch64_get_dregxiv8qi (__o, 1); + ret.val[2] = (int8x8_t) __builtin_aarch64_get_dregxiv8qi (__o, 2); + ret.val[3] = (int8x8_t) __builtin_aarch64_get_dregxiv8qi (__o, 3); + return ret; +} + +__extension__ static __inline poly8x8x4_t __attribute__ ((__always_inline__)) +vld4_p8 (const poly8_t * __a) +{ + poly8x8x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v8qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (poly8x8_t) __builtin_aarch64_get_dregxiv8qi (__o, 0); + ret.val[1] = (poly8x8_t) __builtin_aarch64_get_dregxiv8qi (__o, 1); + ret.val[2] = (poly8x8_t) __builtin_aarch64_get_dregxiv8qi (__o, 2); + ret.val[3] = (poly8x8_t) __builtin_aarch64_get_dregxiv8qi (__o, 3); + return ret; +} + +__extension__ static __inline int16x4x4_t __attribute__ ((__always_inline__)) +vld4_s16 (const int16_t * __a) +{ + int16x4x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v4hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (int16x4_t) __builtin_aarch64_get_dregxiv4hi (__o, 0); + ret.val[1] = (int16x4_t) __builtin_aarch64_get_dregxiv4hi (__o, 1); + ret.val[2] = (int16x4_t) __builtin_aarch64_get_dregxiv4hi (__o, 2); + ret.val[3] = (int16x4_t) __builtin_aarch64_get_dregxiv4hi (__o, 3); + return ret; +} + +__extension__ static __inline poly16x4x4_t __attribute__ ((__always_inline__)) +vld4_p16 (const poly16_t * __a) +{ + poly16x4x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v4hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (poly16x4_t) __builtin_aarch64_get_dregxiv4hi (__o, 0); + ret.val[1] = (poly16x4_t) __builtin_aarch64_get_dregxiv4hi (__o, 1); + ret.val[2] = (poly16x4_t) __builtin_aarch64_get_dregxiv4hi (__o, 2); + ret.val[3] = (poly16x4_t) __builtin_aarch64_get_dregxiv4hi (__o, 3); + return ret; +} + +__extension__ static __inline int32x2x4_t __attribute__ ((__always_inline__)) +vld4_s32 (const int32_t * __a) +{ + int32x2x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v2si ((const __builtin_aarch64_simd_si *) __a); + ret.val[0] = (int32x2_t) __builtin_aarch64_get_dregxiv2si (__o, 0); + ret.val[1] = (int32x2_t) __builtin_aarch64_get_dregxiv2si (__o, 1); + ret.val[2] = (int32x2_t) __builtin_aarch64_get_dregxiv2si (__o, 2); + ret.val[3] = (int32x2_t) __builtin_aarch64_get_dregxiv2si 
(__o, 3); + return ret; +} + +__extension__ static __inline uint8x8x4_t __attribute__ ((__always_inline__)) +vld4_u8 (const uint8_t * __a) +{ + uint8x8x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v8qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (uint8x8_t) __builtin_aarch64_get_dregxiv8qi (__o, 0); + ret.val[1] = (uint8x8_t) __builtin_aarch64_get_dregxiv8qi (__o, 1); + ret.val[2] = (uint8x8_t) __builtin_aarch64_get_dregxiv8qi (__o, 2); + ret.val[3] = (uint8x8_t) __builtin_aarch64_get_dregxiv8qi (__o, 3); + return ret; +} + +__extension__ static __inline uint16x4x4_t __attribute__ ((__always_inline__)) +vld4_u16 (const uint16_t * __a) +{ + uint16x4x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v4hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (uint16x4_t) __builtin_aarch64_get_dregxiv4hi (__o, 0); + ret.val[1] = (uint16x4_t) __builtin_aarch64_get_dregxiv4hi (__o, 1); + ret.val[2] = (uint16x4_t) __builtin_aarch64_get_dregxiv4hi (__o, 2); + ret.val[3] = (uint16x4_t) __builtin_aarch64_get_dregxiv4hi (__o, 3); + return ret; +} + +__extension__ static __inline uint32x2x4_t __attribute__ ((__always_inline__)) +vld4_u32 (const uint32_t * __a) +{ + uint32x2x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v2si ((const __builtin_aarch64_simd_si *) __a); + ret.val[0] = (uint32x2_t) __builtin_aarch64_get_dregxiv2si (__o, 0); + ret.val[1] = (uint32x2_t) __builtin_aarch64_get_dregxiv2si (__o, 1); + ret.val[2] = (uint32x2_t) __builtin_aarch64_get_dregxiv2si (__o, 2); + ret.val[3] = (uint32x2_t) __builtin_aarch64_get_dregxiv2si (__o, 3); + return ret; +} + +__extension__ static __inline float32x2x4_t __attribute__ ((__always_inline__)) +vld4_f32 (const float32_t * __a) +{ + float32x2x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v2sf ((const __builtin_aarch64_simd_sf *) __a); + ret.val[0] = (float32x2_t) __builtin_aarch64_get_dregxiv2sf (__o, 0); + ret.val[1] = (float32x2_t) __builtin_aarch64_get_dregxiv2sf (__o, 1); + ret.val[2] = (float32x2_t) __builtin_aarch64_get_dregxiv2sf (__o, 2); + ret.val[3] = (float32x2_t) __builtin_aarch64_get_dregxiv2sf (__o, 3); + return ret; +} + +__extension__ static __inline int8x16x4_t __attribute__ ((__always_inline__)) +vld4q_s8 (const int8_t * __a) +{ + int8x16x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v16qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (int8x16_t) __builtin_aarch64_get_qregxiv16qi (__o, 0); + ret.val[1] = (int8x16_t) __builtin_aarch64_get_qregxiv16qi (__o, 1); + ret.val[2] = (int8x16_t) __builtin_aarch64_get_qregxiv16qi (__o, 2); + ret.val[3] = (int8x16_t) __builtin_aarch64_get_qregxiv16qi (__o, 3); + return ret; +} + +__extension__ static __inline poly8x16x4_t __attribute__ ((__always_inline__)) +vld4q_p8 (const poly8_t * __a) +{ + poly8x16x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v16qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (poly8x16_t) __builtin_aarch64_get_qregxiv16qi (__o, 0); + ret.val[1] = (poly8x16_t) __builtin_aarch64_get_qregxiv16qi (__o, 1); + ret.val[2] = (poly8x16_t) __builtin_aarch64_get_qregxiv16qi (__o, 2); + ret.val[3] = (poly8x16_t) __builtin_aarch64_get_qregxiv16qi (__o, 3); + return ret; +} + +__extension__ static __inline int16x8x4_t __attribute__ ((__always_inline__)) +vld4q_s16 (const int16_t * __a) +{ + int16x8x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v8hi ((const 
__builtin_aarch64_simd_hi *) __a); + ret.val[0] = (int16x8_t) __builtin_aarch64_get_qregxiv8hi (__o, 0); + ret.val[1] = (int16x8_t) __builtin_aarch64_get_qregxiv8hi (__o, 1); + ret.val[2] = (int16x8_t) __builtin_aarch64_get_qregxiv8hi (__o, 2); + ret.val[3] = (int16x8_t) __builtin_aarch64_get_qregxiv8hi (__o, 3); + return ret; +} + +__extension__ static __inline poly16x8x4_t __attribute__ ((__always_inline__)) +vld4q_p16 (const poly16_t * __a) +{ + poly16x8x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v8hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (poly16x8_t) __builtin_aarch64_get_qregxiv8hi (__o, 0); + ret.val[1] = (poly16x8_t) __builtin_aarch64_get_qregxiv8hi (__o, 1); + ret.val[2] = (poly16x8_t) __builtin_aarch64_get_qregxiv8hi (__o, 2); + ret.val[3] = (poly16x8_t) __builtin_aarch64_get_qregxiv8hi (__o, 3); + return ret; +} + +__extension__ static __inline int32x4x4_t __attribute__ ((__always_inline__)) +vld4q_s32 (const int32_t * __a) +{ + int32x4x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v4si ((const __builtin_aarch64_simd_si *) __a); + ret.val[0] = (int32x4_t) __builtin_aarch64_get_qregxiv4si (__o, 0); + ret.val[1] = (int32x4_t) __builtin_aarch64_get_qregxiv4si (__o, 1); + ret.val[2] = (int32x4_t) __builtin_aarch64_get_qregxiv4si (__o, 2); + ret.val[3] = (int32x4_t) __builtin_aarch64_get_qregxiv4si (__o, 3); + return ret; +} + +__extension__ static __inline int64x2x4_t __attribute__ ((__always_inline__)) +vld4q_s64 (const int64_t * __a) +{ + int64x2x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v2di ((const __builtin_aarch64_simd_di *) __a); + ret.val[0] = (int64x2_t) __builtin_aarch64_get_qregxiv2di (__o, 0); + ret.val[1] = (int64x2_t) __builtin_aarch64_get_qregxiv2di (__o, 1); + ret.val[2] = (int64x2_t) __builtin_aarch64_get_qregxiv2di (__o, 2); + ret.val[3] = (int64x2_t) __builtin_aarch64_get_qregxiv2di (__o, 3); + return ret; +} + +__extension__ static __inline uint8x16x4_t __attribute__ ((__always_inline__)) +vld4q_u8 (const uint8_t * __a) +{ + uint8x16x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v16qi ((const __builtin_aarch64_simd_qi *) __a); + ret.val[0] = (uint8x16_t) __builtin_aarch64_get_qregxiv16qi (__o, 0); + ret.val[1] = (uint8x16_t) __builtin_aarch64_get_qregxiv16qi (__o, 1); + ret.val[2] = (uint8x16_t) __builtin_aarch64_get_qregxiv16qi (__o, 2); + ret.val[3] = (uint8x16_t) __builtin_aarch64_get_qregxiv16qi (__o, 3); + return ret; +} + +__extension__ static __inline uint16x8x4_t __attribute__ ((__always_inline__)) +vld4q_u16 (const uint16_t * __a) +{ + uint16x8x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v8hi ((const __builtin_aarch64_simd_hi *) __a); + ret.val[0] = (uint16x8_t) __builtin_aarch64_get_qregxiv8hi (__o, 0); + ret.val[1] = (uint16x8_t) __builtin_aarch64_get_qregxiv8hi (__o, 1); + ret.val[2] = (uint16x8_t) __builtin_aarch64_get_qregxiv8hi (__o, 2); + ret.val[3] = (uint16x8_t) __builtin_aarch64_get_qregxiv8hi (__o, 3); + return ret; +} + +__extension__ static __inline uint32x4x4_t __attribute__ ((__always_inline__)) +vld4q_u32 (const uint32_t * __a) +{ + uint32x4x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v4si ((const __builtin_aarch64_simd_si *) __a); + ret.val[0] = (uint32x4_t) __builtin_aarch64_get_qregxiv4si (__o, 0); + ret.val[1] = (uint32x4_t) __builtin_aarch64_get_qregxiv4si (__o, 1); + ret.val[2] = (uint32x4_t) __builtin_aarch64_get_qregxiv4si (__o, 2); + ret.val[3] = 
(uint32x4_t) __builtin_aarch64_get_qregxiv4si (__o, 3); + return ret; +} + +__extension__ static __inline uint64x2x4_t __attribute__ ((__always_inline__)) +vld4q_u64 (const uint64_t * __a) +{ + uint64x2x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v2di ((const __builtin_aarch64_simd_di *) __a); + ret.val[0] = (uint64x2_t) __builtin_aarch64_get_qregxiv2di (__o, 0); + ret.val[1] = (uint64x2_t) __builtin_aarch64_get_qregxiv2di (__o, 1); + ret.val[2] = (uint64x2_t) __builtin_aarch64_get_qregxiv2di (__o, 2); + ret.val[3] = (uint64x2_t) __builtin_aarch64_get_qregxiv2di (__o, 3); + return ret; +} + +__extension__ static __inline float32x4x4_t __attribute__ ((__always_inline__)) +vld4q_f32 (const float32_t * __a) +{ + float32x4x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v4sf ((const __builtin_aarch64_simd_sf *) __a); + ret.val[0] = (float32x4_t) __builtin_aarch64_get_qregxiv4sf (__o, 0); + ret.val[1] = (float32x4_t) __builtin_aarch64_get_qregxiv4sf (__o, 1); + ret.val[2] = (float32x4_t) __builtin_aarch64_get_qregxiv4sf (__o, 2); + ret.val[3] = (float32x4_t) __builtin_aarch64_get_qregxiv4sf (__o, 3); + return ret; +} + +__extension__ static __inline float64x2x4_t __attribute__ ((__always_inline__)) +vld4q_f64 (const float64_t * __a) +{ + float64x2x4_t ret; + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_ld4v2df ((const __builtin_aarch64_simd_df *) __a); + ret.val[0] = (float64x2_t) __builtin_aarch64_get_qregxiv2df (__o, 0); + ret.val[1] = (float64x2_t) __builtin_aarch64_get_qregxiv2df (__o, 1); + ret.val[2] = (float64x2_t) __builtin_aarch64_get_qregxiv2df (__o, 2); + ret.val[3] = (float64x2_t) __builtin_aarch64_get_qregxiv2df (__o, 3); + return ret; +} + +/* vmax */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmax_f32 (float32x2_t __a, float32x2_t __b) +{ + return __builtin_aarch64_smax_nanv2sf (__a, __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vmax_s8 (int8x8_t __a, int8x8_t __b) +{ + return __builtin_aarch64_smaxv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmax_s16 (int16x4_t __a, int16x4_t __b) +{ + return __builtin_aarch64_smaxv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmax_s32 (int32x2_t __a, int32x2_t __b) +{ + return __builtin_aarch64_smaxv2si (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vmax_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_umaxv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmax_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_umaxv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmax_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_umaxv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmaxq_f32 (float32x4_t __a, float32x4_t __b) +{ + return __builtin_aarch64_smax_nanv4sf (__a, __b); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vmaxq_f64 (float64x2_t __a, float64x2_t __b) +{ + return __builtin_aarch64_smax_nanv2df (__a, __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vmaxq_s8 
(int8x16_t __a, int8x16_t __b) +{ + return __builtin_aarch64_smaxv16qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmaxq_s16 (int16x8_t __a, int16x8_t __b) +{ + return __builtin_aarch64_smaxv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmaxq_s32 (int32x4_t __a, int32x4_t __b) +{ + return __builtin_aarch64_smaxv4si (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vmaxq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_umaxv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmaxq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_umaxv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmaxq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_umaxv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +/* vmaxnm */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmaxnm_f32 (float32x2_t __a, float32x2_t __b) +{ + return __builtin_aarch64_smaxv2sf (__a, __b); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmaxnmq_f32 (float32x4_t __a, float32x4_t __b) +{ + return __builtin_aarch64_smaxv4sf (__a, __b); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vmaxnmq_f64 (float64x2_t __a, float64x2_t __b) +{ + return __builtin_aarch64_smaxv2df (__a, __b); +} + +/* vmaxv */ + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vmaxv_f32 (float32x2_t __a) +{ + return vget_lane_f32 (__builtin_aarch64_reduc_smax_nan_v2sf (__a), + 0); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vmaxv_s8 (int8x8_t __a) +{ + return vget_lane_s8 (__builtin_aarch64_reduc_smax_v8qi (__a), 0); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vmaxv_s16 (int16x4_t __a) +{ + return vget_lane_s16 (__builtin_aarch64_reduc_smax_v4hi (__a), 0); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vmaxv_s32 (int32x2_t __a) +{ + return vget_lane_s32 (__builtin_aarch64_reduc_smax_v2si (__a), 0); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vmaxv_u8 (uint8x8_t __a) +{ + return vget_lane_u8 ((uint8x8_t) + __builtin_aarch64_reduc_umax_v8qi ((int8x8_t) __a), + 0); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vmaxv_u16 (uint16x4_t __a) +{ + return vget_lane_u16 ((uint16x4_t) + __builtin_aarch64_reduc_umax_v4hi ((int16x4_t) __a), + 0); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vmaxv_u32 (uint32x2_t __a) +{ + return vget_lane_u32 ((uint32x2_t) + __builtin_aarch64_reduc_umax_v2si ((int32x2_t) __a), + 0); +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vmaxvq_f32 (float32x4_t __a) +{ + return vgetq_lane_f32 (__builtin_aarch64_reduc_smax_nan_v4sf (__a), + 0); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vmaxvq_f64 (float64x2_t __a) +{ + return vgetq_lane_f64 (__builtin_aarch64_reduc_smax_nan_v2df (__a), + 0); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vmaxvq_s8 (int8x16_t __a) +{ + return vgetq_lane_s8 
(__builtin_aarch64_reduc_smax_v16qi (__a), 0); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vmaxvq_s16 (int16x8_t __a) +{ + return vgetq_lane_s16 (__builtin_aarch64_reduc_smax_v8hi (__a), 0); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vmaxvq_s32 (int32x4_t __a) +{ + return vgetq_lane_s32 (__builtin_aarch64_reduc_smax_v4si (__a), 0); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vmaxvq_u8 (uint8x16_t __a) +{ + return vgetq_lane_u8 ((uint8x16_t) + __builtin_aarch64_reduc_umax_v16qi ((int8x16_t) __a), + 0); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vmaxvq_u16 (uint16x8_t __a) +{ + return vgetq_lane_u16 ((uint16x8_t) + __builtin_aarch64_reduc_umax_v8hi ((int16x8_t) __a), + 0); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vmaxvq_u32 (uint32x4_t __a) +{ + return vgetq_lane_u32 ((uint32x4_t) + __builtin_aarch64_reduc_umax_v4si ((int32x4_t) __a), + 0); +} + +/* vmaxnmv */ + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vmaxnmv_f32 (float32x2_t __a) +{ + return vget_lane_f32 (__builtin_aarch64_reduc_smax_v2sf (__a), + 0); +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vmaxnmvq_f32 (float32x4_t __a) +{ + return vgetq_lane_f32 (__builtin_aarch64_reduc_smax_v4sf (__a), 0); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vmaxnmvq_f64 (float64x2_t __a) +{ + return vgetq_lane_f64 (__builtin_aarch64_reduc_smax_v2df (__a), 0); +} + +/* vmin */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmin_f32 (float32x2_t __a, float32x2_t __b) +{ + return __builtin_aarch64_smin_nanv2sf (__a, __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vmin_s8 (int8x8_t __a, int8x8_t __b) +{ + return __builtin_aarch64_sminv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmin_s16 (int16x4_t __a, int16x4_t __b) +{ + return __builtin_aarch64_sminv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmin_s32 (int32x2_t __a, int32x2_t __b) +{ + return __builtin_aarch64_sminv2si (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vmin_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_uminv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmin_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_uminv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmin_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_uminv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vminq_f32 (float32x4_t __a, float32x4_t __b) +{ + return __builtin_aarch64_smin_nanv4sf (__a, __b); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vminq_f64 (float64x2_t __a, float64x2_t __b) +{ + return __builtin_aarch64_smin_nanv2df (__a, __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vminq_s8 (int8x16_t __a, int8x16_t __b) +{ + return __builtin_aarch64_sminv16qi (__a, __b); +} + +__extension__ static 
__inline int16x8_t __attribute__ ((__always_inline__)) +vminq_s16 (int16x8_t __a, int16x8_t __b) +{ + return __builtin_aarch64_sminv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vminq_s32 (int32x4_t __a, int32x4_t __b) +{ + return __builtin_aarch64_sminv4si (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vminq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_uminv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vminq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_uminv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vminq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_uminv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +/* vminnm */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vminnm_f32 (float32x2_t __a, float32x2_t __b) +{ + return __builtin_aarch64_sminv2sf (__a, __b); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vminnmq_f32 (float32x4_t __a, float32x4_t __b) +{ + return __builtin_aarch64_sminv4sf (__a, __b); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vminnmq_f64 (float64x2_t __a, float64x2_t __b) +{ + return __builtin_aarch64_sminv2df (__a, __b); +} + +/* vminv */ + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vminv_f32 (float32x2_t __a) +{ + return vget_lane_f32 (__builtin_aarch64_reduc_smin_nan_v2sf (__a), + 0); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vminv_s8 (int8x8_t __a) +{ + return vget_lane_s8 (__builtin_aarch64_reduc_smin_v8qi (__a), + 0); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vminv_s16 (int16x4_t __a) +{ + return vget_lane_s16 (__builtin_aarch64_reduc_smin_v4hi (__a), 0); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vminv_s32 (int32x2_t __a) +{ + return vget_lane_s32 (__builtin_aarch64_reduc_smin_v2si (__a), 0); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vminv_u8 (uint8x8_t __a) +{ + return vget_lane_u8 ((uint8x8_t) + __builtin_aarch64_reduc_umin_v8qi ((int8x8_t) __a), + 0); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vminv_u16 (uint16x4_t __a) +{ + return vget_lane_u16 ((uint16x4_t) + __builtin_aarch64_reduc_umin_v4hi ((int16x4_t) __a), + 0); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vminv_u32 (uint32x2_t __a) +{ + return vget_lane_u32 ((uint32x2_t) + __builtin_aarch64_reduc_umin_v2si ((int32x2_t) __a), + 0); +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vminvq_f32 (float32x4_t __a) +{ + return vgetq_lane_f32 (__builtin_aarch64_reduc_smin_nan_v4sf (__a), + 0); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vminvq_f64 (float64x2_t __a) +{ + return vgetq_lane_f64 (__builtin_aarch64_reduc_smin_nan_v2df (__a), + 0); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vminvq_s8 (int8x16_t __a) +{ + return vgetq_lane_s8 (__builtin_aarch64_reduc_smin_v16qi (__a), 0); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vminvq_s16 
(int16x8_t __a) +{ + return vgetq_lane_s16 (__builtin_aarch64_reduc_smin_v8hi (__a), 0); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vminvq_s32 (int32x4_t __a) +{ + return vgetq_lane_s32 (__builtin_aarch64_reduc_smin_v4si (__a), 0); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vminvq_u8 (uint8x16_t __a) +{ + return vgetq_lane_u8 ((uint8x16_t) + __builtin_aarch64_reduc_umin_v16qi ((int8x16_t) __a), + 0); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vminvq_u16 (uint16x8_t __a) +{ + return vgetq_lane_u16 ((uint16x8_t) + __builtin_aarch64_reduc_umin_v8hi ((int16x8_t) __a), + 0); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vminvq_u32 (uint32x4_t __a) +{ + return vgetq_lane_u32 ((uint32x4_t) + __builtin_aarch64_reduc_umin_v4si ((int32x4_t) __a), + 0); +} + +/* vminnmv */ + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vminnmv_f32 (float32x2_t __a) +{ + return vget_lane_f32 (__builtin_aarch64_reduc_smin_v2sf (__a), 0); +} + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vminnmvq_f32 (float32x4_t __a) +{ + return vgetq_lane_f32 (__builtin_aarch64_reduc_smin_v4sf (__a), 0); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vminnmvq_f64 (float64x2_t __a) +{ + return vgetq_lane_f64 (__builtin_aarch64_reduc_smin_v2df (__a), 0); +} + +/* vmla */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmla_f32 (float32x2_t a, float32x2_t b, float32x2_t c) +{ + return a + b * c; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmlaq_f32 (float32x4_t a, float32x4_t b, float32x4_t c) +{ + return a + b * c; +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vmlaq_f64 (float64x2_t a, float64x2_t b, float64x2_t c) +{ + return a + b * c; +} + +/* vmla_lane */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmla_lane_f32 (float32x2_t __a, float32x2_t __b, + float32x2_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vget_lane_f32 (__c, __lane))); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmla_lane_s16 (int16x4_t __a, int16x4_t __b, + int16x4_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vget_lane_s16 (__c, __lane))); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmla_lane_s32 (int32x2_t __a, int32x2_t __b, + int32x2_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vget_lane_s32 (__c, __lane))); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmla_lane_u16 (uint16x4_t __a, uint16x4_t __b, + uint16x4_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vget_lane_u16 (__c, __lane))); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmla_lane_u32 (uint32x2_t __a, uint32x2_t __b, + uint32x2_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vget_lane_u32 (__c, __lane))); +} + +/* vmla_laneq */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmla_laneq_f32 (float32x2_t __a, float32x2_t __b, + float32x4_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vgetq_lane_f32 (__c, __lane))); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmla_laneq_s16 (int16x4_t __a, int16x4_t 
__b, + int16x8_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vgetq_lane_s16 (__c, __lane))); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmla_laneq_s32 (int32x2_t __a, int32x2_t __b, + int32x4_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vgetq_lane_s32 (__c, __lane))); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmla_laneq_u16 (uint16x4_t __a, uint16x4_t __b, + uint16x8_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vgetq_lane_u16 (__c, __lane))); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmla_laneq_u32 (uint32x2_t __a, uint32x2_t __b, + uint32x4_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vgetq_lane_u32 (__c, __lane))); +} + +/* vmlaq_lane */ + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmlaq_lane_f32 (float32x4_t __a, float32x4_t __b, + float32x2_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vget_lane_f32 (__c, __lane))); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmlaq_lane_s16 (int16x8_t __a, int16x8_t __b, + int16x4_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vget_lane_s16 (__c, __lane))); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlaq_lane_s32 (int32x4_t __a, int32x4_t __b, + int32x2_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vget_lane_s32 (__c, __lane))); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmlaq_lane_u16 (uint16x8_t __a, uint16x8_t __b, + uint16x4_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vget_lane_u16 (__c, __lane))); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlaq_lane_u32 (uint32x4_t __a, uint32x4_t __b, + uint32x2_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vget_lane_u32 (__c, __lane))); +} + + /* vmlaq_laneq */ + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmlaq_laneq_f32 (float32x4_t __a, float32x4_t __b, + float32x4_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vgetq_lane_f32 (__c, __lane))); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmlaq_laneq_s16 (int16x8_t __a, int16x8_t __b, + int16x8_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vgetq_lane_s16 (__c, __lane))); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlaq_laneq_s32 (int32x4_t __a, int32x4_t __b, + int32x4_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vgetq_lane_s32 (__c, __lane))); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmlaq_laneq_u16 (uint16x8_t __a, uint16x8_t __b, + uint16x8_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vgetq_lane_u16 (__c, __lane))); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlaq_laneq_u32 (uint32x4_t __a, uint32x4_t __b, + uint32x4_t __c, const int __lane) +{ + return (__a + (__b * __aarch64_vgetq_lane_u32 (__c, __lane))); +} + +/* vmls */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmls_f32 (float32x2_t a, float32x2_t b, float32x2_t c) +{ + return a - b * c; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmlsq_f32 (float32x4_t a, float32x4_t b, float32x4_t c) +{ + return a - b * c; +} + 
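Note that the vmla*/vmls* multiply-accumulate intrinsics here are defined as plain `a + b * c` / `a - b * c` expressions, so fusion into a single FMLA/FMLS instruction is left to the compiler's floating-point contraction rules; the separate vfma*/vfms* intrinsics request fused behaviour explicitly. A minimal caller sketch, assuming an AArch64 target with this header on the include path — the helper name `saxmy4` is hypothetical, not part of the header:

#include <arm_neon.h>

/* y[i] -= s * x[i], four floats per iteration via vmlsq_f32
   (hypothetical example, not part of arm_neon.h).  */
static void
saxmy4 (float *y, const float *x, float s, int n)
{
  int i;
  float32x4_t vs = vdupq_n_f32 (s);        /* broadcast the scalar */
  for (i = 0; i + 4 <= n; i += 4)
    {
      float32x4_t vy = vld1q_f32 (y + i);  /* load 4 accumulators */
      float32x4_t vx = vld1q_f32 (x + i);  /* load 4 multiplicands */
      vy = vmlsq_f32 (vy, vx, vs);         /* vy = vy - vx * vs */
      vst1q_f32 (y + i, vy);               /* store back */
    }
  for (; i < n; i++)                       /* scalar tail */
    y[i] -= s * x[i];
}
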
+__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vmlsq_f64 (float64x2_t a, float64x2_t b, float64x2_t c) +{ + return a - b * c; +} + +/* vmls_lane */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmls_lane_f32 (float32x2_t __a, float32x2_t __b, + float32x2_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vget_lane_f32 (__c, __lane))); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmls_lane_s16 (int16x4_t __a, int16x4_t __b, + int16x4_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vget_lane_s16 (__c, __lane))); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmls_lane_s32 (int32x2_t __a, int32x2_t __b, + int32x2_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vget_lane_s32 (__c, __lane))); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmls_lane_u16 (uint16x4_t __a, uint16x4_t __b, + uint16x4_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vget_lane_u16 (__c, __lane))); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmls_lane_u32 (uint32x2_t __a, uint32x2_t __b, + uint32x2_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vget_lane_u32 (__c, __lane))); +} + +/* vmls_laneq */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmls_laneq_f32 (float32x2_t __a, float32x2_t __b, + float32x4_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vgetq_lane_f32 (__c, __lane))); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmls_laneq_s16 (int16x4_t __a, int16x4_t __b, + int16x8_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vgetq_lane_s16 (__c, __lane))); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmls_laneq_s32 (int32x2_t __a, int32x2_t __b, + int32x4_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vgetq_lane_s32 (__c, __lane))); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmls_laneq_u16 (uint16x4_t __a, uint16x4_t __b, + uint16x8_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vgetq_lane_u16 (__c, __lane))); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmls_laneq_u32 (uint32x2_t __a, uint32x2_t __b, + uint32x4_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vgetq_lane_u32 (__c, __lane))); +} + +/* vmlsq_lane */ + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmlsq_lane_f32 (float32x4_t __a, float32x4_t __b, + float32x2_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vget_lane_f32 (__c, __lane))); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmlsq_lane_s16 (int16x8_t __a, int16x8_t __b, + int16x4_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vget_lane_s16 (__c, __lane))); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlsq_lane_s32 (int32x4_t __a, int32x4_t __b, + int32x2_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vget_lane_s32 (__c, __lane))); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmlsq_lane_u16 (uint16x8_t __a, uint16x8_t __b, + uint16x4_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vget_lane_u16 (__c, __lane))); +} + +__extension__ static __inline uint32x4_t 
__attribute__ ((__always_inline__)) +vmlsq_lane_u32 (uint32x4_t __a, uint32x4_t __b, + uint32x2_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vget_lane_u32 (__c, __lane))); +} + + /* vmlsq_laneq */ + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmlsq_laneq_f32 (float32x4_t __a, float32x4_t __b, + float32x4_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vgetq_lane_f32 (__c, __lane))); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmlsq_laneq_s16 (int16x8_t __a, int16x8_t __b, + int16x8_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vgetq_lane_s16 (__c, __lane))); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmlsq_laneq_s32 (int32x4_t __a, int32x4_t __b, + int32x4_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vgetq_lane_s32 (__c, __lane))); +} +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmlsq_laneq_u16 (uint16x8_t __a, uint16x8_t __b, + uint16x8_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vgetq_lane_u16 (__c, __lane))); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmlsq_laneq_u32 (uint32x4_t __a, uint32x4_t __b, + uint32x4_t __c, const int __lane) +{ + return (__a - (__b * __aarch64_vgetq_lane_u32 (__c, __lane))); +} + +/* vmov_n_ */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmov_n_f32 (float32_t __a) +{ + return vdup_n_f32 (__a); +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vmov_n_f64 (float64_t __a) +{ + return __a; +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vmov_n_p8 (poly8_t __a) +{ + return vdup_n_p8 (__a); +} + +__extension__ static __inline poly16x4_t __attribute__ ((__always_inline__)) +vmov_n_p16 (poly16_t __a) +{ + return vdup_n_p16 (__a); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vmov_n_s8 (int8_t __a) +{ + return vdup_n_s8 (__a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmov_n_s16 (int16_t __a) +{ + return vdup_n_s16 (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmov_n_s32 (int32_t __a) +{ + return vdup_n_s32 (__a); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vmov_n_s64 (int64_t __a) +{ + return __a; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vmov_n_u8 (uint8_t __a) +{ + return vdup_n_u8 (__a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmov_n_u16 (uint16_t __a) +{ + return vdup_n_u16 (__a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmov_n_u32 (uint32_t __a) +{ + return vdup_n_u32 (__a); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vmov_n_u64 (uint64_t __a) +{ + return __a; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmovq_n_f32 (float32_t __a) +{ + return vdupq_n_f32 (__a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vmovq_n_f64 (float64_t __a) +{ + return vdupq_n_f64 (__a); +} + +__extension__ static __inline poly8x16_t __attribute__ ((__always_inline__)) +vmovq_n_p8 (poly8_t __a) +{ + return vdupq_n_p8 (__a); +} + +__extension__ static __inline poly16x8_t __attribute__ ((__always_inline__)) +vmovq_n_p16 
(poly16_t __a) +{ + return vdupq_n_p16 (__a); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vmovq_n_s8 (int8_t __a) +{ + return vdupq_n_s8 (__a); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmovq_n_s16 (int16_t __a) +{ + return vdupq_n_s16 (__a); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmovq_n_s32 (int32_t __a) +{ + return vdupq_n_s32 (__a); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vmovq_n_s64 (int64_t __a) +{ + return vdupq_n_s64 (__a); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vmovq_n_u8 (uint8_t __a) +{ + return vdupq_n_u8 (__a); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmovq_n_u16 (uint16_t __a) +{ + return vdupq_n_u16 (__a); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmovq_n_u32 (uint32_t __a) +{ + return vdupq_n_u32 (__a); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vmovq_n_u64 (uint64_t __a) +{ + return vdupq_n_u64 (__a); +} + +/* vmul_lane */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmul_lane_f32 (float32x2_t __a, float32x2_t __b, const int __lane) +{ + return __a * __aarch64_vget_lane_f32 (__b, __lane); +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vmul_lane_f64 (float64x1_t __a, float64x1_t __b, const int __lane) +{ + return __a * __b; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmul_lane_s16 (int16x4_t __a, int16x4_t __b, const int __lane) +{ + return __a * __aarch64_vget_lane_s16 (__b, __lane); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmul_lane_s32 (int32x2_t __a, int32x2_t __b, const int __lane) +{ + return __a * __aarch64_vget_lane_s32 (__b, __lane); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmul_lane_u16 (uint16x4_t __a, uint16x4_t __b, const int __lane) +{ + return __a * __aarch64_vget_lane_u16 (__b, __lane); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmul_lane_u32 (uint32x2_t __a, uint32x2_t __b, const int __lane) +{ + return __a * __aarch64_vget_lane_u32 (__b, __lane); +} + +/* vmul_laneq */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vmul_laneq_f32 (float32x2_t __a, float32x4_t __b, const int __lane) +{ + return __a * __aarch64_vgetq_lane_f32 (__b, __lane); +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vmul_laneq_f64 (float64x1_t __a, float64x2_t __b, const int __lane) +{ + return __a * __aarch64_vgetq_lane_f64 (__b, __lane); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vmul_laneq_s16 (int16x4_t __a, int16x8_t __b, const int __lane) +{ + return __a * __aarch64_vgetq_lane_s16 (__b, __lane); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vmul_laneq_s32 (int32x2_t __a, int32x4_t __b, const int __lane) +{ + return __a * __aarch64_vgetq_lane_s32 (__b, __lane); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vmul_laneq_u16 (uint16x4_t __a, uint16x8_t __b, const int __lane) +{ + return __a * __aarch64_vgetq_lane_u16 (__b, __lane); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vmul_laneq_u32 
(uint32x2_t __a, uint32x4_t __b, const int __lane) +{ + return __a * __aarch64_vgetq_lane_u32 (__b, __lane); +} + +/* vmulq_lane */ + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmulq_lane_f32 (float32x4_t __a, float32x2_t __b, const int __lane) +{ + return __a * __aarch64_vget_lane_f32 (__b, __lane); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vmulq_lane_f64 (float64x2_t __a, float64x1_t __b, const int __lane) +{ + return __a * __b; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmulq_lane_s16 (int16x8_t __a, int16x4_t __b, const int __lane) +{ + return __a * __aarch64_vget_lane_s16 (__b, __lane); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmulq_lane_s32 (int32x4_t __a, int32x2_t __b, const int __lane) +{ + return __a * __aarch64_vget_lane_s32 (__b, __lane); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmulq_lane_u16 (uint16x8_t __a, uint16x4_t __b, const int __lane) +{ + return __a * __aarch64_vget_lane_u16 (__b, __lane); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmulq_lane_u32 (uint32x4_t __a, uint32x2_t __b, const int __lane) +{ + return __a * __aarch64_vget_lane_u32 (__b, __lane); +} + +/* vmulq_laneq */ + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vmulq_laneq_f32 (float32x4_t __a, float32x4_t __b, const int __lane) +{ + return __a * __aarch64_vgetq_lane_f32 (__b, __lane); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vmulq_laneq_f64 (float64x2_t __a, float64x2_t __b, const int __lane) +{ + return __a * __aarch64_vgetq_lane_f64 (__b, __lane); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vmulq_laneq_s16 (int16x8_t __a, int16x8_t __b, const int __lane) +{ + return __a * __aarch64_vgetq_lane_s16 (__b, __lane); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vmulq_laneq_s32 (int32x4_t __a, int32x4_t __b, const int __lane) +{ + return __a * __aarch64_vgetq_lane_s32 (__b, __lane); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vmulq_laneq_u16 (uint16x8_t __a, uint16x8_t __b, const int __lane) +{ + return __a * __aarch64_vgetq_lane_u16 (__b, __lane); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vmulq_laneq_u32 (uint32x4_t __a, uint32x4_t __b, const int __lane) +{ + return __a * __aarch64_vgetq_lane_u32 (__b, __lane); +} + +/* vneg */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vneg_f32 (float32x2_t __a) +{ + return -__a; +} + +__extension__ static __inline float64x1_t __attribute__ ((__always_inline__)) +vneg_f64 (float64x1_t __a) +{ + return -__a; +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vneg_s8 (int8x8_t __a) +{ + return -__a; +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vneg_s16 (int16x4_t __a) +{ + return -__a; +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vneg_s32 (int32x2_t __a) +{ + return -__a; +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vneg_s64 (int64x1_t __a) +{ + return -__a; +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vnegq_f32 (float32x4_t __a) +{ + return -__a; +} + +__extension__ static __inline 
float64x2_t __attribute__ ((__always_inline__)) +vnegq_f64 (float64x2_t __a) +{ + return -__a; +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vnegq_s8 (int8x16_t __a) +{ + return -__a; +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vnegq_s16 (int16x8_t __a) +{ + return -__a; +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vnegq_s32 (int32x4_t __a) +{ + return -__a; +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vnegq_s64 (int64x2_t __a) +{ + return -__a; +} + +/* vqabs */ + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqabsq_s64 (int64x2_t __a) +{ + return (int64x2_t) __builtin_aarch64_sqabsv2di (__a); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqabsb_s8 (int8_t __a) +{ + return (int8_t) __builtin_aarch64_sqabsqi (__a); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqabsh_s16 (int16_t __a) +{ + return (int16_t) __builtin_aarch64_sqabshi (__a); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqabss_s32 (int32_t __a) +{ + return (int32_t) __builtin_aarch64_sqabssi (__a); +} + +/* vqadd */ + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqaddb_s8 (int8_t __a, int8_t __b) +{ + return (int8_t) __builtin_aarch64_sqaddqi (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqaddh_s16 (int16_t __a, int16_t __b) +{ + return (int16_t) __builtin_aarch64_sqaddhi (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqadds_s32 (int32_t __a, int32_t __b) +{ + return (int32_t) __builtin_aarch64_sqaddsi (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqaddd_s64 (int64x1_t __a, int64x1_t __b) +{ + return (int64x1_t) __builtin_aarch64_sqadddi (__a, __b); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vqaddb_u8 (uint8_t __a, uint8_t __b) +{ + return (uint8_t) __builtin_aarch64_uqaddqi (__a, __b); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vqaddh_u16 (uint16_t __a, uint16_t __b) +{ + return (uint16_t) __builtin_aarch64_uqaddhi (__a, __b); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vqadds_u32 (uint32_t __a, uint32_t __b) +{ + return (uint32_t) __builtin_aarch64_uqaddsi (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vqaddd_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_uqadddi (__a, __b); +} + +/* vqdmlal */ + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlal_s16 (int32x4_t __a, int16x4_t __b, int16x4_t __c) +{ + return __builtin_aarch64_sqdmlalv4hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlal_high_s16 (int32x4_t __a, int16x8_t __b, int16x8_t __c) +{ + return __builtin_aarch64_sqdmlal2v8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlal_high_lane_s16 (int32x4_t __a, int16x8_t __b, int16x4_t __c, + int const __d) +{ + return __builtin_aarch64_sqdmlal2_lanev8hi (__a, __b, __c, __d); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlal_high_laneq_s16 (int32x4_t __a, int16x8_t __b, int16x8_t __c, + int 
const __d) +{ + return __builtin_aarch64_sqdmlal2_laneqv8hi (__a, __b, __c, __d); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlal_high_n_s16 (int32x4_t __a, int16x8_t __b, int16_t __c) +{ + return __builtin_aarch64_sqdmlal2_nv8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlal_lane_s16 (int32x4_t __a, int16x4_t __b, int16x4_t __c, int const __d) +{ + return __builtin_aarch64_sqdmlal_lanev4hi (__a, __b, __c, __d); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlal_laneq_s16 (int32x4_t __a, int16x4_t __b, int16x8_t __c, int const __d) +{ + return __builtin_aarch64_sqdmlal_laneqv4hi (__a, __b, __c, __d); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlal_n_s16 (int32x4_t __a, int16x4_t __b, int16_t __c) +{ + return __builtin_aarch64_sqdmlal_nv4hi (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlal_s32 (int64x2_t __a, int32x2_t __b, int32x2_t __c) +{ + return __builtin_aarch64_sqdmlalv2si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlal_high_s32 (int64x2_t __a, int32x4_t __b, int32x4_t __c) +{ + return __builtin_aarch64_sqdmlal2v4si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlal_high_lane_s32 (int64x2_t __a, int32x4_t __b, int32x2_t __c, + int const __d) +{ + return __builtin_aarch64_sqdmlal2_lanev4si (__a, __b, __c, __d); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlal_high_laneq_s32 (int64x2_t __a, int32x4_t __b, int32x4_t __c, + int const __d) +{ + return __builtin_aarch64_sqdmlal2_laneqv4si (__a, __b, __c, __d); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlal_high_n_s32 (int64x2_t __a, int32x4_t __b, int32_t __c) +{ + return __builtin_aarch64_sqdmlal2_nv4si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlal_lane_s32 (int64x2_t __a, int32x2_t __b, int32x2_t __c, int const __d) +{ + return __builtin_aarch64_sqdmlal_lanev2si (__a, __b, __c, __d); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlal_laneq_s32 (int64x2_t __a, int32x2_t __b, int32x4_t __c, int const __d) +{ + return __builtin_aarch64_sqdmlal_laneqv2si (__a, __b, __c, __d); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlal_n_s32 (int64x2_t __a, int32x2_t __b, int32_t __c) +{ + return __builtin_aarch64_sqdmlal_nv2si (__a, __b, __c); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqdmlalh_s16 (int32_t __a, int16_t __b, int16_t __c) +{ + return __builtin_aarch64_sqdmlalhi (__a, __b, __c); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqdmlalh_lane_s16 (int32_t __a, int16_t __b, int16x4_t __c, const int __d) +{ + return __builtin_aarch64_sqdmlal_lanehi (__a, __b, __c, __d); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqdmlals_s32 (int64x1_t __a, int32_t __b, int32_t __c) +{ + return __builtin_aarch64_sqdmlalsi (__a, __b, __c); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqdmlals_lane_s32 (int64x1_t __a, int32_t __b, int32x2_t __c, const int __d) +{ + return __builtin_aarch64_sqdmlal_lanesi (__a, __b, __c, __d); +} + +/* 
vqdmlsl */ + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlsl_s16 (int32x4_t __a, int16x4_t __b, int16x4_t __c) +{ + return __builtin_aarch64_sqdmlslv4hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlsl_high_s16 (int32x4_t __a, int16x8_t __b, int16x8_t __c) +{ + return __builtin_aarch64_sqdmlsl2v8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlsl_high_lane_s16 (int32x4_t __a, int16x8_t __b, int16x4_t __c, + int const __d) +{ + return __builtin_aarch64_sqdmlsl2_lanev8hi (__a, __b, __c, __d); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlsl_high_laneq_s16 (int32x4_t __a, int16x8_t __b, int16x8_t __c, + int const __d) +{ + return __builtin_aarch64_sqdmlsl2_laneqv8hi (__a, __b, __c, __d); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlsl_high_n_s16 (int32x4_t __a, int16x8_t __b, int16_t __c) +{ + return __builtin_aarch64_sqdmlsl2_nv8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlsl_lane_s16 (int32x4_t __a, int16x4_t __b, int16x4_t __c, int const __d) +{ + return __builtin_aarch64_sqdmlsl_lanev4hi (__a, __b, __c, __d); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlsl_laneq_s16 (int32x4_t __a, int16x4_t __b, int16x8_t __c, int const __d) +{ + return __builtin_aarch64_sqdmlsl_laneqv4hi (__a, __b, __c, __d); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmlsl_n_s16 (int32x4_t __a, int16x4_t __b, int16_t __c) +{ + return __builtin_aarch64_sqdmlsl_nv4hi (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlsl_s32 (int64x2_t __a, int32x2_t __b, int32x2_t __c) +{ + return __builtin_aarch64_sqdmlslv2si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlsl_high_s32 (int64x2_t __a, int32x4_t __b, int32x4_t __c) +{ + return __builtin_aarch64_sqdmlsl2v4si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlsl_high_lane_s32 (int64x2_t __a, int32x4_t __b, int32x2_t __c, + int const __d) +{ + return __builtin_aarch64_sqdmlsl2_lanev4si (__a, __b, __c, __d); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlsl_high_laneq_s32 (int64x2_t __a, int32x4_t __b, int32x4_t __c, + int const __d) +{ + return __builtin_aarch64_sqdmlsl2_laneqv4si (__a, __b, __c, __d); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlsl_high_n_s32 (int64x2_t __a, int32x4_t __b, int32_t __c) +{ + return __builtin_aarch64_sqdmlsl2_nv4si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlsl_lane_s32 (int64x2_t __a, int32x2_t __b, int32x2_t __c, int const __d) +{ + return __builtin_aarch64_sqdmlsl_lanev2si (__a, __b, __c, __d); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlsl_laneq_s32 (int64x2_t __a, int32x2_t __b, int32x4_t __c, int const __d) +{ + return __builtin_aarch64_sqdmlsl_laneqv2si (__a, __b, __c, __d); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmlsl_n_s32 (int64x2_t __a, int32x2_t __b, int32_t __c) +{ + return __builtin_aarch64_sqdmlsl_nv2si (__a, __b, __c); +} + +__extension__ static 
__inline int32_t __attribute__ ((__always_inline__)) +vqdmlslh_s16 (int32_t __a, int16_t __b, int16_t __c) +{ + return __builtin_aarch64_sqdmlslhi (__a, __b, __c); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqdmlslh_lane_s16 (int32_t __a, int16_t __b, int16x4_t __c, const int __d) +{ + return __builtin_aarch64_sqdmlsl_lanehi (__a, __b, __c, __d); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqdmlsls_s32 (int64x1_t __a, int32_t __b, int32_t __c) +{ + return __builtin_aarch64_sqdmlslsi (__a, __b, __c); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqdmlsls_lane_s32 (int64x1_t __a, int32_t __b, int32x2_t __c, const int __d) +{ + return __builtin_aarch64_sqdmlsl_lanesi (__a, __b, __c, __d); +} + +/* vqdmulh */ + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqdmulh_lane_s16 (int16x4_t __a, int16x4_t __b, const int __c) +{ + return __builtin_aarch64_sqdmulh_lanev4hi (__a, __b, __c); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqdmulh_lane_s32 (int32x2_t __a, int32x2_t __b, const int __c) +{ + return __builtin_aarch64_sqdmulh_lanev2si (__a, __b, __c); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqdmulhq_lane_s16 (int16x8_t __a, int16x4_t __b, const int __c) +{ + return __builtin_aarch64_sqdmulh_lanev8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmulhq_lane_s32 (int32x4_t __a, int32x2_t __b, const int __c) +{ + return __builtin_aarch64_sqdmulh_lanev4si (__a, __b, __c); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqdmulhh_s16 (int16_t __a, int16_t __b) +{ + return (int16_t) __builtin_aarch64_sqdmulhhi (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqdmulhh_lane_s16 (int16_t __a, int16x4_t __b, const int __c) +{ + return __builtin_aarch64_sqdmulh_lanehi (__a, __b, __c); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqdmulhs_s32 (int32_t __a, int32_t __b) +{ + return (int32_t) __builtin_aarch64_sqdmulhsi (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqdmulhs_lane_s32 (int32_t __a, int32x2_t __b, const int __c) +{ + return __builtin_aarch64_sqdmulh_lanesi (__a, __b, __c); +} + +/* vqdmull */ + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmull_s16 (int16x4_t __a, int16x4_t __b) +{ + return __builtin_aarch64_sqdmullv4hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmull_high_s16 (int16x8_t __a, int16x8_t __b) +{ + return __builtin_aarch64_sqdmull2v8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmull_high_lane_s16 (int16x8_t __a, int16x4_t __b, int const __c) +{ + return __builtin_aarch64_sqdmull2_lanev8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmull_high_laneq_s16 (int16x8_t __a, int16x8_t __b, int const __c) +{ + return __builtin_aarch64_sqdmull2_laneqv8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmull_high_n_s16 (int16x8_t __a, int16_t __b) +{ + return __builtin_aarch64_sqdmull2_nv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmull_lane_s16 
(int16x4_t __a, int16x4_t __b, int const __c) +{ + return __builtin_aarch64_sqdmull_lanev4hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmull_laneq_s16 (int16x4_t __a, int16x8_t __b, int const __c) +{ + return __builtin_aarch64_sqdmull_laneqv4hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqdmull_n_s16 (int16x4_t __a, int16_t __b) +{ + return __builtin_aarch64_sqdmull_nv4hi (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmull_s32 (int32x2_t __a, int32x2_t __b) +{ + return __builtin_aarch64_sqdmullv2si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmull_high_s32 (int32x4_t __a, int32x4_t __b) +{ + return __builtin_aarch64_sqdmull2v4si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmull_high_lane_s32 (int32x4_t __a, int32x2_t __b, int const __c) +{ + return __builtin_aarch64_sqdmull2_lanev4si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmull_high_laneq_s32 (int32x4_t __a, int32x4_t __b, int const __c) +{ + return __builtin_aarch64_sqdmull2_laneqv4si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmull_high_n_s32 (int32x4_t __a, int32_t __b) +{ + return __builtin_aarch64_sqdmull2_nv4si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmull_lane_s32 (int32x2_t __a, int32x2_t __b, int const __c) +{ + return __builtin_aarch64_sqdmull_lanev2si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmull_laneq_s32 (int32x2_t __a, int32x4_t __b, int const __c) +{ + return __builtin_aarch64_sqdmull_laneqv2si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqdmull_n_s32 (int32x2_t __a, int32_t __b) +{ + return __builtin_aarch64_sqdmull_nv2si (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqdmullh_s16 (int16_t __a, int16_t __b) +{ + return (int32_t) __builtin_aarch64_sqdmullhi (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqdmullh_lane_s16 (int16_t __a, int16x4_t __b, const int __c) +{ + return __builtin_aarch64_sqdmull_lanehi (__a, __b, __c); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqdmulls_s32 (int32_t __a, int32_t __b) +{ + return (int64x1_t) __builtin_aarch64_sqdmullsi (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqdmulls_lane_s32 (int32_t __a, int32x2_t __b, const int __c) +{ + return __builtin_aarch64_sqdmull_lanesi (__a, __b, __c); +} + +/* vqmovn */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqmovn_s16 (int16x8_t __a) +{ + return (int8x8_t) __builtin_aarch64_sqmovnv8hi (__a); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqmovn_s32 (int32x4_t __a) +{ + return (int16x4_t) __builtin_aarch64_sqmovnv4si (__a); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqmovn_s64 (int64x2_t __a) +{ + return (int32x2_t) __builtin_aarch64_sqmovnv2di (__a); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqmovn_u16 (uint16x8_t __a) +{ + return (uint8x8_t) 
__builtin_aarch64_uqmovnv8hi ((int16x8_t) __a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vqmovn_u32 (uint32x4_t __a) +{ + return (uint16x4_t) __builtin_aarch64_uqmovnv4si ((int32x4_t) __a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vqmovn_u64 (uint64x2_t __a) +{ + return (uint32x2_t) __builtin_aarch64_uqmovnv2di ((int64x2_t) __a); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqmovnh_s16 (int16_t __a) +{ + return (int8_t) __builtin_aarch64_sqmovnhi (__a); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqmovns_s32 (int32_t __a) +{ + return (int16_t) __builtin_aarch64_sqmovnsi (__a); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqmovnd_s64 (int64x1_t __a) +{ + return (int32_t) __builtin_aarch64_sqmovndi (__a); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vqmovnh_u16 (uint16_t __a) +{ + return (uint8_t) __builtin_aarch64_uqmovnhi (__a); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vqmovns_u32 (uint32_t __a) +{ + return (uint16_t) __builtin_aarch64_uqmovnsi (__a); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vqmovnd_u64 (uint64x1_t __a) +{ + return (uint32_t) __builtin_aarch64_uqmovndi (__a); +} + +/* vqmovun */ + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqmovun_s16 (int16x8_t __a) +{ + return (uint8x8_t) __builtin_aarch64_sqmovunv8hi (__a); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vqmovun_s32 (int32x4_t __a) +{ + return (uint16x4_t) __builtin_aarch64_sqmovunv4si (__a); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vqmovun_s64 (int64x2_t __a) +{ + return (uint32x2_t) __builtin_aarch64_sqmovunv2di (__a); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqmovunh_s16 (int16_t __a) +{ + return (int8_t) __builtin_aarch64_sqmovunhi (__a); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqmovuns_s32 (int32_t __a) +{ + return (int16_t) __builtin_aarch64_sqmovunsi (__a); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqmovund_s64 (int64x1_t __a) +{ + return (int32_t) __builtin_aarch64_sqmovundi (__a); +} + +/* vqneg */ + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqnegq_s64 (int64x2_t __a) +{ + return (int64x2_t) __builtin_aarch64_sqnegv2di (__a); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqnegb_s8 (int8_t __a) +{ + return (int8_t) __builtin_aarch64_sqnegqi (__a); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqnegh_s16 (int16_t __a) +{ + return (int16_t) __builtin_aarch64_sqneghi (__a); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqnegs_s32 (int32_t __a) +{ + return (int32_t) __builtin_aarch64_sqnegsi (__a); +} + +/* vqrdmulh */ + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqrdmulh_lane_s16 (int16x4_t __a, int16x4_t __b, const int __c) +{ + return __builtin_aarch64_sqrdmulh_lanev4hi (__a, __b, __c); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqrdmulh_lane_s32 (int32x2_t __a, int32x2_t __b, const int __c) +{ + return __builtin_aarch64_sqrdmulh_lanev2si (__a, __b, __c); +} + 
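Before the quad-register vqrdmulh forms below, a minimal usage sketch of the saturating doubling-multiply-high scalars defined in this stretch of the header (vqdmulhh_s16 and its rounding counterpart vqrdmulhh_s16). The operand values and the function name qdmulh_sketch are illustrative only, and the sketch assumes compilation for an AArch64 target whose <arm_neon.h> provides these intrinsics:

#include <arm_neon.h>
#include <assert.h>

/* Both intrinsics return the high 16 bits of 2 * __a * __b; the R
   form rounds that high half, and both saturate instead of wrapping.  */
void
qdmulh_sketch (void)
{
  /* (2 * 16384 * 1) >> 16 truncates to 0 ...  */
  assert (vqdmulhh_s16 (16384, 1) == 0);
  /* ... while the rounding form biases by 0x8000 before the shift
     and yields 1.  */
  assert (vqrdmulhh_s16 (16384, 1) == 1);
  /* 2 * INT16_MIN * INT16_MIN would be 2^31; the doubled product
     saturates to INT32_MAX, so the high half is INT16_MAX rather
     than a wrapped negative value.  */
  assert (vqdmulhh_s16 (-32768, -32768) == 32767);
}

The same clamp-rather-than-wrap behaviour carries through the other vq-prefixed families in this listing (vqadd, vqsub, vqshl, vqmovn and friends).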
+__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqrdmulhq_lane_s16 (int16x8_t __a, int16x4_t __b, const int __c) +{ + return __builtin_aarch64_sqrdmulh_lanev8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqrdmulhq_lane_s32 (int32x4_t __a, int32x2_t __b, const int __c) +{ + return __builtin_aarch64_sqrdmulh_lanev4si (__a, __b, __c); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqrdmulhh_s16 (int16_t __a, int16_t __b) +{ + return (int16_t) __builtin_aarch64_sqrdmulhhi (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqrdmulhh_lane_s16 (int16_t __a, int16x4_t __b, const int __c) +{ + return __builtin_aarch64_sqrdmulh_lanehi (__a, __b, __c); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqrdmulhs_s32 (int32_t __a, int32_t __b) +{ + return (int32_t) __builtin_aarch64_sqrdmulhsi (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqrdmulhs_lane_s32 (int32_t __a, int32x2_t __b, const int __c) +{ + return __builtin_aarch64_sqrdmulh_lanesi (__a, __b, __c); +} + +/* vqrshl */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqrshl_s8 (int8x8_t __a, int8x8_t __b) +{ + return __builtin_aarch64_sqrshlv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqrshl_s16 (int16x4_t __a, int16x4_t __b) +{ + return __builtin_aarch64_sqrshlv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqrshl_s32 (int32x2_t __a, int32x2_t __b) +{ + return __builtin_aarch64_sqrshlv2si (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqrshl_s64 (int64x1_t __a, int64x1_t __b) +{ + return __builtin_aarch64_sqrshldi (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqrshl_u8 (uint8x8_t __a, int8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_uqrshlv8qi ((int8x8_t) __a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vqrshl_u16 (uint16x4_t __a, int16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_uqrshlv4hi ((int16x4_t) __a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vqrshl_u32 (uint32x2_t __a, int32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_uqrshlv2si ((int32x2_t) __a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vqrshl_u64 (uint64x1_t __a, int64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_uqrshldi ((int64x1_t) __a, __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqrshlq_s8 (int8x16_t __a, int8x16_t __b) +{ + return __builtin_aarch64_sqrshlv16qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqrshlq_s16 (int16x8_t __a, int16x8_t __b) +{ + return __builtin_aarch64_sqrshlv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqrshlq_s32 (int32x4_t __a, int32x4_t __b) +{ + return __builtin_aarch64_sqrshlv4si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqrshlq_s64 (int64x2_t __a, int64x2_t __b) +{ + return __builtin_aarch64_sqrshlv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqrshlq_u8 
(uint8x16_t __a, int8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_uqrshlv16qi ((int8x16_t) __a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vqrshlq_u16 (uint16x8_t __a, int16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_uqrshlv8hi ((int16x8_t) __a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vqrshlq_u32 (uint32x4_t __a, int32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_uqrshlv4si ((int32x4_t) __a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vqrshlq_u64 (uint64x2_t __a, int64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_uqrshlv2di ((int64x2_t) __a, __b); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqrshlb_s8 (int8_t __a, int8_t __b) +{ + return __builtin_aarch64_sqrshlqi (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqrshlh_s16 (int16_t __a, int16_t __b) +{ + return __builtin_aarch64_sqrshlhi (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqrshls_s32 (int32_t __a, int32_t __b) +{ + return __builtin_aarch64_sqrshlsi (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqrshld_s64 (int64x1_t __a, int64x1_t __b) +{ + return __builtin_aarch64_sqrshldi (__a, __b); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vqrshlb_u8 (uint8_t __a, uint8_t __b) +{ + return (uint8_t) __builtin_aarch64_uqrshlqi (__a, __b); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vqrshlh_u16 (uint16_t __a, uint16_t __b) +{ + return (uint16_t) __builtin_aarch64_uqrshlhi (__a, __b); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vqrshls_u32 (uint32_t __a, uint32_t __b) +{ + return (uint32_t) __builtin_aarch64_uqrshlsi (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vqrshld_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_uqrshldi (__a, __b); +} + +/* vqrshrn */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqrshrn_n_s16 (int16x8_t __a, const int __b) +{ + return (int8x8_t) __builtin_aarch64_sqrshrn_nv8hi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqrshrn_n_s32 (int32x4_t __a, const int __b) +{ + return (int16x4_t) __builtin_aarch64_sqrshrn_nv4si (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqrshrn_n_s64 (int64x2_t __a, const int __b) +{ + return (int32x2_t) __builtin_aarch64_sqrshrn_nv2di (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqrshrn_n_u16 (uint16x8_t __a, const int __b) +{ + return (uint8x8_t) __builtin_aarch64_uqrshrn_nv8hi ((int16x8_t) __a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vqrshrn_n_u32 (uint32x4_t __a, const int __b) +{ + return (uint16x4_t) __builtin_aarch64_uqrshrn_nv4si ((int32x4_t) __a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vqrshrn_n_u64 (uint64x2_t __a, const int __b) +{ + return (uint32x2_t) __builtin_aarch64_uqrshrn_nv2di ((int64x2_t) __a, __b); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqrshrnh_n_s16 (int16_t __a, const int __b) +{ + return (int8_t) 
__builtin_aarch64_sqrshrn_nhi (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqrshrns_n_s32 (int32_t __a, const int __b) +{ + return (int16_t) __builtin_aarch64_sqrshrn_nsi (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqrshrnd_n_s64 (int64x1_t __a, const int __b) +{ + return (int32_t) __builtin_aarch64_sqrshrn_ndi (__a, __b); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vqrshrnh_n_u16 (uint16_t __a, const int __b) +{ + return (uint8_t) __builtin_aarch64_uqrshrn_nhi (__a, __b); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vqrshrns_n_u32 (uint32_t __a, const int __b) +{ + return (uint16_t) __builtin_aarch64_uqrshrn_nsi (__a, __b); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vqrshrnd_n_u64 (uint64x1_t __a, const int __b) +{ + return (uint32_t) __builtin_aarch64_uqrshrn_ndi (__a, __b); +} + +/* vqrshrun */ + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqrshrun_n_s16 (int16x8_t __a, const int __b) +{ + return (uint8x8_t) __builtin_aarch64_sqrshrun_nv8hi (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vqrshrun_n_s32 (int32x4_t __a, const int __b) +{ + return (uint16x4_t) __builtin_aarch64_sqrshrun_nv4si (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vqrshrun_n_s64 (int64x2_t __a, const int __b) +{ + return (uint32x2_t) __builtin_aarch64_sqrshrun_nv2di (__a, __b); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqrshrunh_n_s16 (int16_t __a, const int __b) +{ + return (int8_t) __builtin_aarch64_sqrshrun_nhi (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqrshruns_n_s32 (int32_t __a, const int __b) +{ + return (int16_t) __builtin_aarch64_sqrshrun_nsi (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqrshrund_n_s64 (int64x1_t __a, const int __b) +{ + return (int32_t) __builtin_aarch64_sqrshrun_ndi (__a, __b); +} + +/* vqshl */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqshl_s8 (int8x8_t __a, int8x8_t __b) +{ + return __builtin_aarch64_sqshlv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqshl_s16 (int16x4_t __a, int16x4_t __b) +{ + return __builtin_aarch64_sqshlv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqshl_s32 (int32x2_t __a, int32x2_t __b) +{ + return __builtin_aarch64_sqshlv2si (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqshl_s64 (int64x1_t __a, int64x1_t __b) +{ + return __builtin_aarch64_sqshldi (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqshl_u8 (uint8x8_t __a, int8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_uqshlv8qi ((int8x8_t) __a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vqshl_u16 (uint16x4_t __a, int16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_uqshlv4hi ((int16x4_t) __a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vqshl_u32 (uint32x2_t __a, int32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_uqshlv2si ((int32x2_t) __a, __b); +} + +__extension__ static __inline uint64x1_t 
__attribute__ ((__always_inline__)) +vqshl_u64 (uint64x1_t __a, int64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_uqshldi ((int64x1_t) __a, __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqshlq_s8 (int8x16_t __a, int8x16_t __b) +{ + return __builtin_aarch64_sqshlv16qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqshlq_s16 (int16x8_t __a, int16x8_t __b) +{ + return __builtin_aarch64_sqshlv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqshlq_s32 (int32x4_t __a, int32x4_t __b) +{ + return __builtin_aarch64_sqshlv4si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqshlq_s64 (int64x2_t __a, int64x2_t __b) +{ + return __builtin_aarch64_sqshlv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqshlq_u8 (uint8x16_t __a, int8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_uqshlv16qi ((int8x16_t) __a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vqshlq_u16 (uint16x8_t __a, int16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_uqshlv8hi ((int16x8_t) __a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vqshlq_u32 (uint32x4_t __a, int32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_uqshlv4si ((int32x4_t) __a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vqshlq_u64 (uint64x2_t __a, int64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_uqshlv2di ((int64x2_t) __a, __b); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqshlb_s8 (int8_t __a, int8_t __b) +{ + return __builtin_aarch64_sqshlqi (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqshlh_s16 (int16_t __a, int16_t __b) +{ + return __builtin_aarch64_sqshlhi (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqshls_s32 (int32_t __a, int32_t __b) +{ + return __builtin_aarch64_sqshlsi (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqshld_s64 (int64x1_t __a, int64x1_t __b) +{ + return __builtin_aarch64_sqshldi (__a, __b); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vqshlb_u8 (uint8_t __a, uint8_t __b) +{ + return (uint8_t) __builtin_aarch64_uqshlqi (__a, __b); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vqshlh_u16 (uint16_t __a, uint16_t __b) +{ + return (uint16_t) __builtin_aarch64_uqshlhi (__a, __b); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vqshls_u32 (uint32_t __a, uint32_t __b) +{ + return (uint32_t) __builtin_aarch64_uqshlsi (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vqshld_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_uqshldi (__a, __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqshl_n_s8 (int8x8_t __a, const int __b) +{ + return (int8x8_t) __builtin_aarch64_sqshl_nv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqshl_n_s16 (int16x4_t __a, const int __b) +{ + return (int16x4_t) __builtin_aarch64_sqshl_nv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) 
+vqshl_n_s32 (int32x2_t __a, const int __b) +{ + return (int32x2_t) __builtin_aarch64_sqshl_nv2si (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqshl_n_s64 (int64x1_t __a, const int __b) +{ + return (int64x1_t) __builtin_aarch64_sqshl_ndi (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqshl_n_u8 (uint8x8_t __a, const int __b) +{ + return (uint8x8_t) __builtin_aarch64_uqshl_nv8qi ((int8x8_t) __a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vqshl_n_u16 (uint16x4_t __a, const int __b) +{ + return (uint16x4_t) __builtin_aarch64_uqshl_nv4hi ((int16x4_t) __a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vqshl_n_u32 (uint32x2_t __a, const int __b) +{ + return (uint32x2_t) __builtin_aarch64_uqshl_nv2si ((int32x2_t) __a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vqshl_n_u64 (uint64x1_t __a, const int __b) +{ + return (uint64x1_t) __builtin_aarch64_uqshl_ndi ((int64x1_t) __a, __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vqshlq_n_s8 (int8x16_t __a, const int __b) +{ + return (int8x16_t) __builtin_aarch64_sqshl_nv16qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vqshlq_n_s16 (int16x8_t __a, const int __b) +{ + return (int16x8_t) __builtin_aarch64_sqshl_nv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vqshlq_n_s32 (int32x4_t __a, const int __b) +{ + return (int32x4_t) __builtin_aarch64_sqshl_nv4si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vqshlq_n_s64 (int64x2_t __a, const int __b) +{ + return (int64x2_t) __builtin_aarch64_sqshl_nv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqshlq_n_u8 (uint8x16_t __a, const int __b) +{ + return (uint8x16_t) __builtin_aarch64_uqshl_nv16qi ((int8x16_t) __a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vqshlq_n_u16 (uint16x8_t __a, const int __b) +{ + return (uint16x8_t) __builtin_aarch64_uqshl_nv8hi ((int16x8_t) __a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vqshlq_n_u32 (uint32x4_t __a, const int __b) +{ + return (uint32x4_t) __builtin_aarch64_uqshl_nv4si ((int32x4_t) __a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vqshlq_n_u64 (uint64x2_t __a, const int __b) +{ + return (uint64x2_t) __builtin_aarch64_uqshl_nv2di ((int64x2_t) __a, __b); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqshlb_n_s8 (int8_t __a, const int __b) +{ + return (int8_t) __builtin_aarch64_sqshl_nqi (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqshlh_n_s16 (int16_t __a, const int __b) +{ + return (int16_t) __builtin_aarch64_sqshl_nhi (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqshls_n_s32 (int32_t __a, const int __b) +{ + return (int32_t) __builtin_aarch64_sqshl_nsi (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqshld_n_s64 (int64x1_t __a, const int __b) +{ + return (int64x1_t) __builtin_aarch64_sqshl_ndi (__a, __b); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) 
+vqshlb_n_u8 (uint8_t __a, const int __b) +{ + return (uint8_t) __builtin_aarch64_uqshl_nqi (__a, __b); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vqshlh_n_u16 (uint16_t __a, const int __b) +{ + return (uint16_t) __builtin_aarch64_uqshl_nhi (__a, __b); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vqshls_n_u32 (uint32_t __a, const int __b) +{ + return (uint32_t) __builtin_aarch64_uqshl_nsi (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vqshld_n_u64 (uint64x1_t __a, const int __b) +{ + return (uint64x1_t) __builtin_aarch64_uqshl_ndi (__a, __b); +} + +/* vqshlu */ + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqshlu_n_s8 (int8x8_t __a, const int __b) +{ + return (uint8x8_t) __builtin_aarch64_sqshlu_nv8qi (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vqshlu_n_s16 (int16x4_t __a, const int __b) +{ + return (uint16x4_t) __builtin_aarch64_sqshlu_nv4hi (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vqshlu_n_s32 (int32x2_t __a, const int __b) +{ + return (uint32x2_t) __builtin_aarch64_sqshlu_nv2si (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vqshlu_n_s64 (int64x1_t __a, const int __b) +{ + return (uint64x1_t) __builtin_aarch64_sqshlu_ndi (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vqshluq_n_s8 (int8x16_t __a, const int __b) +{ + return (uint8x16_t) __builtin_aarch64_sqshlu_nv16qi (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vqshluq_n_s16 (int16x8_t __a, const int __b) +{ + return (uint16x8_t) __builtin_aarch64_sqshlu_nv8hi (__a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vqshluq_n_s32 (int32x4_t __a, const int __b) +{ + return (uint32x4_t) __builtin_aarch64_sqshlu_nv4si (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vqshluq_n_s64 (int64x2_t __a, const int __b) +{ + return (uint64x2_t) __builtin_aarch64_sqshlu_nv2di (__a, __b); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqshlub_n_s8 (int8_t __a, const int __b) +{ + return (int8_t) __builtin_aarch64_sqshlu_nqi (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqshluh_n_s16 (int16_t __a, const int __b) +{ + return (int16_t) __builtin_aarch64_sqshlu_nhi (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqshlus_n_s32 (int32_t __a, const int __b) +{ + return (int32_t) __builtin_aarch64_sqshlu_nsi (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vqshlud_n_s64 (int64x1_t __a, const int __b) +{ + return (int64x1_t) __builtin_aarch64_sqshlu_ndi (__a, __b); +} + +/* vqshrn */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vqshrn_n_s16 (int16x8_t __a, const int __b) +{ + return (int8x8_t) __builtin_aarch64_sqshrn_nv8hi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vqshrn_n_s32 (int32x4_t __a, const int __b) +{ + return (int16x4_t) __builtin_aarch64_sqshrn_nv4si (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vqshrn_n_s64 (int64x2_t __a, const int __b) +{ + return 
(int32x2_t) __builtin_aarch64_sqshrn_nv2di (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqshrn_n_u16 (uint16x8_t __a, const int __b) +{ + return (uint8x8_t) __builtin_aarch64_uqshrn_nv8hi ((int16x8_t) __a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vqshrn_n_u32 (uint32x4_t __a, const int __b) +{ + return (uint16x4_t) __builtin_aarch64_uqshrn_nv4si ((int32x4_t) __a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vqshrn_n_u64 (uint64x2_t __a, const int __b) +{ + return (uint32x2_t) __builtin_aarch64_uqshrn_nv2di ((int64x2_t) __a, __b); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqshrnh_n_s16 (int16_t __a, const int __b) +{ + return (int8_t) __builtin_aarch64_sqshrn_nhi (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqshrns_n_s32 (int32_t __a, const int __b) +{ + return (int16_t) __builtin_aarch64_sqshrn_nsi (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqshrnd_n_s64 (int64x1_t __a, const int __b) +{ + return (int32_t) __builtin_aarch64_sqshrn_ndi (__a, __b); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vqshrnh_n_u16 (uint16_t __a, const int __b) +{ + return (uint8_t) __builtin_aarch64_uqshrn_nhi (__a, __b); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vqshrns_n_u32 (uint32_t __a, const int __b) +{ + return (uint16_t) __builtin_aarch64_uqshrn_nsi (__a, __b); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vqshrnd_n_u64 (uint64x1_t __a, const int __b) +{ + return (uint32_t) __builtin_aarch64_uqshrn_ndi (__a, __b); +} + +/* vqshrun */ + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vqshrun_n_s16 (int16x8_t __a, const int __b) +{ + return (uint8x8_t) __builtin_aarch64_sqshrun_nv8hi (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vqshrun_n_s32 (int32x4_t __a, const int __b) +{ + return (uint16x4_t) __builtin_aarch64_sqshrun_nv4si (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vqshrun_n_s64 (int64x2_t __a, const int __b) +{ + return (uint32x2_t) __builtin_aarch64_sqshrun_nv2di (__a, __b); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqshrunh_n_s16 (int16_t __a, const int __b) +{ + return (int8_t) __builtin_aarch64_sqshrun_nhi (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqshruns_n_s32 (int32_t __a, const int __b) +{ + return (int16_t) __builtin_aarch64_sqshrun_nsi (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqshrund_n_s64 (int64x1_t __a, const int __b) +{ + return (int32_t) __builtin_aarch64_sqshrun_ndi (__a, __b); +} + +/* vqsub */ + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vqsubb_s8 (int8_t __a, int8_t __b) +{ + return (int8_t) __builtin_aarch64_sqsubqi (__a, __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vqsubh_s16 (int16_t __a, int16_t __b) +{ + return (int16_t) __builtin_aarch64_sqsubhi (__a, __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vqsubs_s32 (int32_t __a, int32_t __b) +{ + return (int32_t) __builtin_aarch64_sqsubsi (__a, __b); +} + +__extension__ 
static __inline int64x1_t __attribute__ ((__always_inline__)) +vqsubd_s64 (int64x1_t __a, int64x1_t __b) +{ + return (int64x1_t) __builtin_aarch64_sqsubdi (__a, __b); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vqsubb_u8 (uint8_t __a, uint8_t __b) +{ + return (uint8_t) __builtin_aarch64_uqsubqi (__a, __b); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vqsubh_u16 (uint16_t __a, uint16_t __b) +{ + return (uint16_t) __builtin_aarch64_uqsubhi (__a, __b); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vqsubs_u32 (uint32_t __a, uint32_t __b) +{ + return (uint32_t) __builtin_aarch64_uqsubsi (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vqsubd_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_uqsubdi (__a, __b); +} + +/* vrecpe */ + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vrecpes_f32 (float32_t __a) +{ + return __builtin_aarch64_frecpesf (__a); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vrecped_f64 (float64_t __a) +{ + return __builtin_aarch64_frecpedf (__a); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vrecpe_f32 (float32x2_t __a) +{ + return __builtin_aarch64_frecpev2sf (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vrecpeq_f32 (float32x4_t __a) +{ + return __builtin_aarch64_frecpev4sf (__a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vrecpeq_f64 (float64x2_t __a) +{ + return __builtin_aarch64_frecpev2df (__a); +} + +/* vrecps */ + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vrecpss_f32 (float32_t __a, float32_t __b) +{ + return __builtin_aarch64_frecpssf (__a, __b); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vrecpsd_f64 (float64_t __a, float64_t __b) +{ + return __builtin_aarch64_frecpsdf (__a, __b); +} + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vrecps_f32 (float32x2_t __a, float32x2_t __b) +{ + return __builtin_aarch64_frecpsv2sf (__a, __b); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vrecpsq_f32 (float32x4_t __a, float32x4_t __b) +{ + return __builtin_aarch64_frecpsv4sf (__a, __b); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vrecpsq_f64 (float64x2_t __a, float64x2_t __b) +{ + return __builtin_aarch64_frecpsv2df (__a, __b); +} + +/* vrecpx */ + +__extension__ static __inline float32_t __attribute__ ((__always_inline__)) +vrecpxs_f32 (float32_t __a) +{ + return __builtin_aarch64_frecpxsf (__a); +} + +__extension__ static __inline float64_t __attribute__ ((__always_inline__)) +vrecpxd_f64 (float64_t __a) +{ + return __builtin_aarch64_frecpxdf (__a); +} + +/* vrnd */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vrnd_f32 (float32x2_t __a) +{ + return __builtin_aarch64_btruncv2sf (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vrndq_f32 (float32x4_t __a) +{ + return __builtin_aarch64_btruncv4sf (__a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vrndq_f64 (float64x2_t __a) +{ + return __builtin_aarch64_btruncv2df (__a); +} + +/* vrnda */ + +__extension__ static __inline float32x2_t __attribute__ 
((__always_inline__)) +vrnda_f32 (float32x2_t __a) +{ + return __builtin_aarch64_roundv2sf (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vrndaq_f32 (float32x4_t __a) +{ + return __builtin_aarch64_roundv4sf (__a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vrndaq_f64 (float64x2_t __a) +{ + return __builtin_aarch64_roundv2df (__a); +} + +/* vrndi */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vrndi_f32 (float32x2_t __a) +{ + return __builtin_aarch64_nearbyintv2sf (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vrndiq_f32 (float32x4_t __a) +{ + return __builtin_aarch64_nearbyintv4sf (__a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vrndiq_f64 (float64x2_t __a) +{ + return __builtin_aarch64_nearbyintv2df (__a); +} + +/* vrndm */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vrndm_f32 (float32x2_t __a) +{ + return __builtin_aarch64_floorv2sf (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vrndmq_f32 (float32x4_t __a) +{ + return __builtin_aarch64_floorv4sf (__a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vrndmq_f64 (float64x2_t __a) +{ + return __builtin_aarch64_floorv2df (__a); +} + +/* vrndn */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vrndn_f32 (float32x2_t __a) +{ + return __builtin_aarch64_frintnv2sf (__a); +} +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vrndnq_f32 (float32x4_t __a) +{ + return __builtin_aarch64_frintnv4sf (__a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vrndnq_f64 (float64x2_t __a) +{ + return __builtin_aarch64_frintnv2df (__a); +} + +/* vrndp */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vrndp_f32 (float32x2_t __a) +{ + return __builtin_aarch64_ceilv2sf (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vrndpq_f32 (float32x4_t __a) +{ + return __builtin_aarch64_ceilv4sf (__a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vrndpq_f64 (float64x2_t __a) +{ + return __builtin_aarch64_ceilv2df (__a); +} + +/* vrndx */ + +__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vrndx_f32 (float32x2_t __a) +{ + return __builtin_aarch64_rintv2sf (__a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vrndxq_f32 (float32x4_t __a) +{ + return __builtin_aarch64_rintv4sf (__a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vrndxq_f64 (float64x2_t __a) +{ + return __builtin_aarch64_rintv2df (__a); +} + +/* vrshl */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vrshl_s8 (int8x8_t __a, int8x8_t __b) +{ + return (int8x8_t) __builtin_aarch64_srshlv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vrshl_s16 (int16x4_t __a, int16x4_t __b) +{ + return (int16x4_t) __builtin_aarch64_srshlv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vrshl_s32 (int32x2_t __a, int32x2_t __b) +{ + return (int32x2_t) __builtin_aarch64_srshlv2si (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ 
((__always_inline__)) +vrshl_s64 (int64x1_t __a, int64x1_t __b) +{ + return (int64x1_t) __builtin_aarch64_srshldi (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vrshl_u8 (uint8x8_t __a, int8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_urshlv8qi ((int8x8_t) __a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vrshl_u16 (uint16x4_t __a, int16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_urshlv4hi ((int16x4_t) __a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vrshl_u32 (uint32x2_t __a, int32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_urshlv2si ((int32x2_t) __a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vrshl_u64 (uint64x1_t __a, int64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_urshldi ((int64x1_t) __a, __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vrshlq_s8 (int8x16_t __a, int8x16_t __b) +{ + return (int8x16_t) __builtin_aarch64_srshlv16qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vrshlq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (int16x8_t) __builtin_aarch64_srshlv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vrshlq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (int32x4_t) __builtin_aarch64_srshlv4si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vrshlq_s64 (int64x2_t __a, int64x2_t __b) +{ + return (int64x2_t) __builtin_aarch64_srshlv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vrshlq_u8 (uint8x16_t __a, int8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_urshlv16qi ((int8x16_t) __a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vrshlq_u16 (uint16x8_t __a, int16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_urshlv8hi ((int16x8_t) __a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vrshlq_u32 (uint32x4_t __a, int32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_urshlv4si ((int32x4_t) __a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vrshlq_u64 (uint64x2_t __a, int64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_urshlv2di ((int64x2_t) __a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vrshld_s64 (int64x1_t __a, int64x1_t __b) +{ + return (int64x1_t) __builtin_aarch64_srshldi (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vrshld_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_urshldi (__a, __b); +} + +/* vrshr */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vrshr_n_s8 (int8x8_t __a, const int __b) +{ + return (int8x8_t) __builtin_aarch64_srshr_nv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vrshr_n_s16 (int16x4_t __a, const int __b) +{ + return (int16x4_t) __builtin_aarch64_srshr_nv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vrshr_n_s32 (int32x2_t __a, const int __b) +{ + return (int32x2_t) __builtin_aarch64_srshr_nv2si (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) 
+vrshr_n_s64 (int64x1_t __a, const int __b) +{ + return (int64x1_t) __builtin_aarch64_srshr_ndi (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vrshr_n_u8 (uint8x8_t __a, const int __b) +{ + return (uint8x8_t) __builtin_aarch64_urshr_nv8qi ((int8x8_t) __a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vrshr_n_u16 (uint16x4_t __a, const int __b) +{ + return (uint16x4_t) __builtin_aarch64_urshr_nv4hi ((int16x4_t) __a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vrshr_n_u32 (uint32x2_t __a, const int __b) +{ + return (uint32x2_t) __builtin_aarch64_urshr_nv2si ((int32x2_t) __a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vrshr_n_u64 (uint64x1_t __a, const int __b) +{ + return (uint64x1_t) __builtin_aarch64_urshr_ndi ((int64x1_t) __a, __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vrshrq_n_s8 (int8x16_t __a, const int __b) +{ + return (int8x16_t) __builtin_aarch64_srshr_nv16qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vrshrq_n_s16 (int16x8_t __a, const int __b) +{ + return (int16x8_t) __builtin_aarch64_srshr_nv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vrshrq_n_s32 (int32x4_t __a, const int __b) +{ + return (int32x4_t) __builtin_aarch64_srshr_nv4si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vrshrq_n_s64 (int64x2_t __a, const int __b) +{ + return (int64x2_t) __builtin_aarch64_srshr_nv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vrshrq_n_u8 (uint8x16_t __a, const int __b) +{ + return (uint8x16_t) __builtin_aarch64_urshr_nv16qi ((int8x16_t) __a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vrshrq_n_u16 (uint16x8_t __a, const int __b) +{ + return (uint16x8_t) __builtin_aarch64_urshr_nv8hi ((int16x8_t) __a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vrshrq_n_u32 (uint32x4_t __a, const int __b) +{ + return (uint32x4_t) __builtin_aarch64_urshr_nv4si ((int32x4_t) __a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vrshrq_n_u64 (uint64x2_t __a, const int __b) +{ + return (uint64x2_t) __builtin_aarch64_urshr_nv2di ((int64x2_t) __a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vrshrd_n_s64 (int64x1_t __a, const int __b) +{ + return (int64x1_t) __builtin_aarch64_srshr_ndi (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vrshrd_n_u64 (uint64x1_t __a, const int __b) +{ + return (uint64x1_t) __builtin_aarch64_urshr_ndi (__a, __b); +} + +/* vrsra */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vrsra_n_s8 (int8x8_t __a, int8x8_t __b, const int __c) +{ + return (int8x8_t) __builtin_aarch64_srsra_nv8qi (__a, __b, __c); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vrsra_n_s16 (int16x4_t __a, int16x4_t __b, const int __c) +{ + return (int16x4_t) __builtin_aarch64_srsra_nv4hi (__a, __b, __c); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vrsra_n_s32 (int32x2_t __a, int32x2_t __b, const int __c) +{ + return (int32x2_t) __builtin_aarch64_srsra_nv2si (__a, 
__b, __c); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vrsra_n_s64 (int64x1_t __a, int64x1_t __b, const int __c) +{ + return (int64x1_t) __builtin_aarch64_srsra_ndi (__a, __b, __c); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vrsra_n_u8 (uint8x8_t __a, uint8x8_t __b, const int __c) +{ + return (uint8x8_t) __builtin_aarch64_ursra_nv8qi ((int8x8_t) __a, + (int8x8_t) __b, __c); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vrsra_n_u16 (uint16x4_t __a, uint16x4_t __b, const int __c) +{ + return (uint16x4_t) __builtin_aarch64_ursra_nv4hi ((int16x4_t) __a, + (int16x4_t) __b, __c); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vrsra_n_u32 (uint32x2_t __a, uint32x2_t __b, const int __c) +{ + return (uint32x2_t) __builtin_aarch64_ursra_nv2si ((int32x2_t) __a, + (int32x2_t) __b, __c); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vrsra_n_u64 (uint64x1_t __a, uint64x1_t __b, const int __c) +{ + return (uint64x1_t) __builtin_aarch64_ursra_ndi ((int64x1_t) __a, + (int64x1_t) __b, __c); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vrsraq_n_s8 (int8x16_t __a, int8x16_t __b, const int __c) +{ + return (int8x16_t) __builtin_aarch64_srsra_nv16qi (__a, __b, __c); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vrsraq_n_s16 (int16x8_t __a, int16x8_t __b, const int __c) +{ + return (int16x8_t) __builtin_aarch64_srsra_nv8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vrsraq_n_s32 (int32x4_t __a, int32x4_t __b, const int __c) +{ + return (int32x4_t) __builtin_aarch64_srsra_nv4si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vrsraq_n_s64 (int64x2_t __a, int64x2_t __b, const int __c) +{ + return (int64x2_t) __builtin_aarch64_srsra_nv2di (__a, __b, __c); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vrsraq_n_u8 (uint8x16_t __a, uint8x16_t __b, const int __c) +{ + return (uint8x16_t) __builtin_aarch64_ursra_nv16qi ((int8x16_t) __a, + (int8x16_t) __b, __c); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vrsraq_n_u16 (uint16x8_t __a, uint16x8_t __b, const int __c) +{ + return (uint16x8_t) __builtin_aarch64_ursra_nv8hi ((int16x8_t) __a, + (int16x8_t) __b, __c); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vrsraq_n_u32 (uint32x4_t __a, uint32x4_t __b, const int __c) +{ + return (uint32x4_t) __builtin_aarch64_ursra_nv4si ((int32x4_t) __a, + (int32x4_t) __b, __c); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vrsraq_n_u64 (uint64x2_t __a, uint64x2_t __b, const int __c) +{ + return (uint64x2_t) __builtin_aarch64_ursra_nv2di ((int64x2_t) __a, + (int64x2_t) __b, __c); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vrsrad_n_s64 (int64x1_t __a, int64x1_t __b, const int __c) +{ + return (int64x1_t) __builtin_aarch64_srsra_ndi (__a, __b, __c); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vrsrad_n_u64 (uint64x1_t __a, uint64x1_t __b, const int __c) +{ + return (uint64x1_t) __builtin_aarch64_ursra_ndi (__a, __b, __c); +} + +#ifdef __ARM_FEATURE_CRYPTO + +/* vsha1 */ + +static __inline uint32x4_t +vsha1cq_u32 (uint32x4_t hash_abcd, uint32_t 
hash_e, uint32x4_t wk) +{ + return __builtin_aarch64_crypto_sha1cv4si_uuuu (hash_abcd, hash_e, wk); +} +static __inline uint32x4_t +vsha1mq_u32 (uint32x4_t hash_abcd, uint32_t hash_e, uint32x4_t wk) +{ + return __builtin_aarch64_crypto_sha1mv4si_uuuu (hash_abcd, hash_e, wk); +} +static __inline uint32x4_t +vsha1pq_u32 (uint32x4_t hash_abcd, uint32_t hash_e, uint32x4_t wk) +{ + return __builtin_aarch64_crypto_sha1pv4si_uuuu (hash_abcd, hash_e, wk); +} + +static __inline uint32_t +vsha1h_u32 (uint32_t hash_e) +{ + return __builtin_aarch64_crypto_sha1hsi_uu (hash_e); +} + +static __inline uint32x4_t +vsha1su0q_u32 (uint32x4_t w0_3, uint32x4_t w4_7, uint32x4_t w8_11) +{ + return __builtin_aarch64_crypto_sha1su0v4si_uuuu (w0_3, w4_7, w8_11); +} + +static __inline uint32x4_t +vsha1su1q_u32 (uint32x4_t tw0_3, uint32x4_t w12_15) +{ + return __builtin_aarch64_crypto_sha1su1v4si_uuu (tw0_3, w12_15); +} + +static __inline uint32x4_t +vsha256hq_u32 (uint32x4_t hash_abcd, uint32x4_t hash_efgh, uint32x4_t wk) +{ + return __builtin_aarch64_crypto_sha256hv4si_uuuu (hash_abcd, hash_efgh, wk); +} + +static __inline uint32x4_t +vsha256h2q_u32 (uint32x4_t hash_efgh, uint32x4_t hash_abcd, uint32x4_t wk) +{ + return __builtin_aarch64_crypto_sha256h2v4si_uuuu (hash_efgh, hash_abcd, wk); +} + +static __inline uint32x4_t +vsha256su0q_u32 (uint32x4_t w0_3, uint32x4_t w4_7) +{ + return __builtin_aarch64_crypto_sha256su0v4si_uuu (w0_3, w4_7); +} + +static __inline uint32x4_t +vsha256su1q_u32 (uint32x4_t tw0_3, uint32x4_t w8_11, uint32x4_t w12_15) +{ + return __builtin_aarch64_crypto_sha256su1v4si_uuuu (tw0_3, w8_11, w12_15); +} + +static __inline poly128_t +vmull_p64 (poly64_t a, poly64_t b) +{ + return + __builtin_aarch64_crypto_pmulldi_ppp (a, b); +} + +static __inline poly128_t +vmull_high_p64 (poly64x2_t a, poly64x2_t b) +{ + return __builtin_aarch64_crypto_pmullv2di_ppp (a, b); +} + +#endif + +/* vshl */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vshl_n_s8 (int8x8_t __a, const int __b) +{ + return (int8x8_t) __builtin_aarch64_ashlv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vshl_n_s16 (int16x4_t __a, const int __b) +{ + return (int16x4_t) __builtin_aarch64_ashlv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vshl_n_s32 (int32x2_t __a, const int __b) +{ + return (int32x2_t) __builtin_aarch64_ashlv2si (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vshl_n_s64 (int64x1_t __a, const int __b) +{ + return (int64x1_t) __builtin_aarch64_ashldi (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vshl_n_u8 (uint8x8_t __a, const int __b) +{ + return (uint8x8_t) __builtin_aarch64_ashlv8qi ((int8x8_t) __a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vshl_n_u16 (uint16x4_t __a, const int __b) +{ + return (uint16x4_t) __builtin_aarch64_ashlv4hi ((int16x4_t) __a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vshl_n_u32 (uint32x2_t __a, const int __b) +{ + return (uint32x2_t) __builtin_aarch64_ashlv2si ((int32x2_t) __a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vshl_n_u64 (uint64x1_t __a, const int __b) +{ + return (uint64x1_t) __builtin_aarch64_ashldi ((int64x1_t) __a, __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) 
+vshlq_n_s8 (int8x16_t __a, const int __b) +{ + return (int8x16_t) __builtin_aarch64_ashlv16qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vshlq_n_s16 (int16x8_t __a, const int __b) +{ + return (int16x8_t) __builtin_aarch64_ashlv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vshlq_n_s32 (int32x4_t __a, const int __b) +{ + return (int32x4_t) __builtin_aarch64_ashlv4si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vshlq_n_s64 (int64x2_t __a, const int __b) +{ + return (int64x2_t) __builtin_aarch64_ashlv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vshlq_n_u8 (uint8x16_t __a, const int __b) +{ + return (uint8x16_t) __builtin_aarch64_ashlv16qi ((int8x16_t) __a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vshlq_n_u16 (uint16x8_t __a, const int __b) +{ + return (uint16x8_t) __builtin_aarch64_ashlv8hi ((int16x8_t) __a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vshlq_n_u32 (uint32x4_t __a, const int __b) +{ + return (uint32x4_t) __builtin_aarch64_ashlv4si ((int32x4_t) __a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vshlq_n_u64 (uint64x2_t __a, const int __b) +{ + return (uint64x2_t) __builtin_aarch64_ashlv2di ((int64x2_t) __a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vshld_n_s64 (int64x1_t __a, const int __b) +{ + return (int64x1_t) __builtin_aarch64_ashldi (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vshld_n_u64 (uint64x1_t __a, const int __b) +{ + return (uint64x1_t) __builtin_aarch64_ashldi (__a, __b); +} + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vshl_s8 (int8x8_t __a, int8x8_t __b) +{ + return (int8x8_t) __builtin_aarch64_sshlv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vshl_s16 (int16x4_t __a, int16x4_t __b) +{ + return (int16x4_t) __builtin_aarch64_sshlv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vshl_s32 (int32x2_t __a, int32x2_t __b) +{ + return (int32x2_t) __builtin_aarch64_sshlv2si (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vshl_s64 (int64x1_t __a, int64x1_t __b) +{ + return (int64x1_t) __builtin_aarch64_sshldi (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vshl_u8 (uint8x8_t __a, int8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_ushlv8qi ((int8x8_t) __a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vshl_u16 (uint16x4_t __a, int16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_ushlv4hi ((int16x4_t) __a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vshl_u32 (uint32x2_t __a, int32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_ushlv2si ((int32x2_t) __a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vshl_u64 (uint64x1_t __a, int64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_ushldi ((int64x1_t) __a, __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vshlq_s8 (int8x16_t __a, int8x16_t __b) +{ + return (int8x16_t) 
__builtin_aarch64_sshlv16qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vshlq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (int16x8_t) __builtin_aarch64_sshlv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vshlq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (int32x4_t) __builtin_aarch64_sshlv4si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vshlq_s64 (int64x2_t __a, int64x2_t __b) +{ + return (int64x2_t) __builtin_aarch64_sshlv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vshlq_u8 (uint8x16_t __a, int8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_ushlv16qi ((int8x16_t) __a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vshlq_u16 (uint16x8_t __a, int16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_ushlv8hi ((int16x8_t) __a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vshlq_u32 (uint32x4_t __a, int32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_ushlv4si ((int32x4_t) __a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vshlq_u64 (uint64x2_t __a, int64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_ushlv2di ((int64x2_t) __a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vshld_s64 (int64x1_t __a, int64x1_t __b) +{ + return (int64x1_t) __builtin_aarch64_sshldi (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vshld_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_ushldi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vshll_high_n_s8 (int8x16_t __a, const int __b) +{ + return __builtin_aarch64_sshll2_nv16qi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vshll_high_n_s16 (int16x8_t __a, const int __b) +{ + return __builtin_aarch64_sshll2_nv8hi (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vshll_high_n_s32 (int32x4_t __a, const int __b) +{ + return __builtin_aarch64_sshll2_nv4si (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vshll_high_n_u8 (uint8x16_t __a, const int __b) +{ + return (uint16x8_t) __builtin_aarch64_ushll2_nv16qi ((int8x16_t) __a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vshll_high_n_u16 (uint16x8_t __a, const int __b) +{ + return (uint32x4_t) __builtin_aarch64_ushll2_nv8hi ((int16x8_t) __a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vshll_high_n_u32 (uint32x4_t __a, const int __b) +{ + return (uint64x2_t) __builtin_aarch64_ushll2_nv4si ((int32x4_t) __a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vshll_n_s8 (int8x8_t __a, const int __b) +{ + return __builtin_aarch64_sshll_nv8qi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vshll_n_s16 (int16x4_t __a, const int __b) +{ + return __builtin_aarch64_sshll_nv4hi (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vshll_n_s32 (int32x2_t __a, const int __b) +{ + return __builtin_aarch64_sshll_nv2si (__a, __b); +} + +__extension__ static __inline 
uint16x8_t __attribute__ ((__always_inline__)) +vshll_n_u8 (uint8x8_t __a, const int __b) +{ + return (uint16x8_t) __builtin_aarch64_ushll_nv8qi ((int8x8_t) __a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vshll_n_u16 (uint16x4_t __a, const int __b) +{ + return (uint32x4_t) __builtin_aarch64_ushll_nv4hi ((int16x4_t) __a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vshll_n_u32 (uint32x2_t __a, const int __b) +{ + return (uint64x2_t) __builtin_aarch64_ushll_nv2si ((int32x2_t) __a, __b); +} + +/* vshr */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vshr_n_s8 (int8x8_t __a, const int __b) +{ + return (int8x8_t) __builtin_aarch64_ashrv8qi (__a, __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vshr_n_s16 (int16x4_t __a, const int __b) +{ + return (int16x4_t) __builtin_aarch64_ashrv4hi (__a, __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vshr_n_s32 (int32x2_t __a, const int __b) +{ + return (int32x2_t) __builtin_aarch64_ashrv2si (__a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vshr_n_s64 (int64x1_t __a, const int __b) +{ + return (int64x1_t) __builtin_aarch64_ashr_simddi (__a, __b); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vshr_n_u8 (uint8x8_t __a, const int __b) +{ + return (uint8x8_t) __builtin_aarch64_lshrv8qi ((int8x8_t) __a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vshr_n_u16 (uint16x4_t __a, const int __b) +{ + return (uint16x4_t) __builtin_aarch64_lshrv4hi ((int16x4_t) __a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vshr_n_u32 (uint32x2_t __a, const int __b) +{ + return (uint32x2_t) __builtin_aarch64_lshrv2si ((int32x2_t) __a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vshr_n_u64 (uint64x1_t __a, const int __b) +{ + return __builtin_aarch64_lshr_simddi_uus ( __a, __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vshrq_n_s8 (int8x16_t __a, const int __b) +{ + return (int8x16_t) __builtin_aarch64_ashrv16qi (__a, __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vshrq_n_s16 (int16x8_t __a, const int __b) +{ + return (int16x8_t) __builtin_aarch64_ashrv8hi (__a, __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vshrq_n_s32 (int32x4_t __a, const int __b) +{ + return (int32x4_t) __builtin_aarch64_ashrv4si (__a, __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vshrq_n_s64 (int64x2_t __a, const int __b) +{ + return (int64x2_t) __builtin_aarch64_ashrv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vshrq_n_u8 (uint8x16_t __a, const int __b) +{ + return (uint8x16_t) __builtin_aarch64_lshrv16qi ((int8x16_t) __a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vshrq_n_u16 (uint16x8_t __a, const int __b) +{ + return (uint16x8_t) __builtin_aarch64_lshrv8hi ((int16x8_t) __a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vshrq_n_u32 (uint32x4_t __a, const int __b) +{ + return (uint32x4_t) __builtin_aarch64_lshrv4si ((int32x4_t) __a, __b); +} + +__extension__ static __inline uint64x2_t 
__attribute__ ((__always_inline__)) +vshrq_n_u64 (uint64x2_t __a, const int __b) +{ + return (uint64x2_t) __builtin_aarch64_lshrv2di ((int64x2_t) __a, __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vshrd_n_s64 (int64x1_t __a, const int __b) +{ + return (int64x1_t) __builtin_aarch64_ashr_simddi (__a, __b); +} + +__extension__ static __inline uint64_t __attribute__ ((__always_inline__)) +vshrd_n_u64 (uint64_t __a, const int __b) +{ + return __builtin_aarch64_lshr_simddi_uus (__a, __b); +} + +/* vsli */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vsli_n_s8 (int8x8_t __a, int8x8_t __b, const int __c) +{ + return (int8x8_t) __builtin_aarch64_ssli_nv8qi (__a, __b, __c); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vsli_n_s16 (int16x4_t __a, int16x4_t __b, const int __c) +{ + return (int16x4_t) __builtin_aarch64_ssli_nv4hi (__a, __b, __c); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vsli_n_s32 (int32x2_t __a, int32x2_t __b, const int __c) +{ + return (int32x2_t) __builtin_aarch64_ssli_nv2si (__a, __b, __c); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vsli_n_s64 (int64x1_t __a, int64x1_t __b, const int __c) +{ + return (int64x1_t) __builtin_aarch64_ssli_ndi (__a, __b, __c); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vsli_n_u8 (uint8x8_t __a, uint8x8_t __b, const int __c) +{ + return (uint8x8_t) __builtin_aarch64_usli_nv8qi ((int8x8_t) __a, + (int8x8_t) __b, __c); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vsli_n_u16 (uint16x4_t __a, uint16x4_t __b, const int __c) +{ + return (uint16x4_t) __builtin_aarch64_usli_nv4hi ((int16x4_t) __a, + (int16x4_t) __b, __c); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vsli_n_u32 (uint32x2_t __a, uint32x2_t __b, const int __c) +{ + return (uint32x2_t) __builtin_aarch64_usli_nv2si ((int32x2_t) __a, + (int32x2_t) __b, __c); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vsli_n_u64 (uint64x1_t __a, uint64x1_t __b, const int __c) +{ + return (uint64x1_t) __builtin_aarch64_usli_ndi ((int64x1_t) __a, + (int64x1_t) __b, __c); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vsliq_n_s8 (int8x16_t __a, int8x16_t __b, const int __c) +{ + return (int8x16_t) __builtin_aarch64_ssli_nv16qi (__a, __b, __c); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vsliq_n_s16 (int16x8_t __a, int16x8_t __b, const int __c) +{ + return (int16x8_t) __builtin_aarch64_ssli_nv8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vsliq_n_s32 (int32x4_t __a, int32x4_t __b, const int __c) +{ + return (int32x4_t) __builtin_aarch64_ssli_nv4si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vsliq_n_s64 (int64x2_t __a, int64x2_t __b, const int __c) +{ + return (int64x2_t) __builtin_aarch64_ssli_nv2di (__a, __b, __c); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vsliq_n_u8 (uint8x16_t __a, uint8x16_t __b, const int __c) +{ + return (uint8x16_t) __builtin_aarch64_usli_nv16qi ((int8x16_t) __a, + (int8x16_t) __b, __c); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vsliq_n_u16 (uint16x8_t __a, uint16x8_t __b, const int 
__c) +{ + return (uint16x8_t) __builtin_aarch64_usli_nv8hi ((int16x8_t) __a, + (int16x8_t) __b, __c); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vsliq_n_u32 (uint32x4_t __a, uint32x4_t __b, const int __c) +{ + return (uint32x4_t) __builtin_aarch64_usli_nv4si ((int32x4_t) __a, + (int32x4_t) __b, __c); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vsliq_n_u64 (uint64x2_t __a, uint64x2_t __b, const int __c) +{ + return (uint64x2_t) __builtin_aarch64_usli_nv2di ((int64x2_t) __a, + (int64x2_t) __b, __c); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vslid_n_s64 (int64x1_t __a, int64x1_t __b, const int __c) +{ + return (int64x1_t) __builtin_aarch64_ssli_ndi (__a, __b, __c); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vslid_n_u64 (uint64x1_t __a, uint64x1_t __b, const int __c) +{ + return (uint64x1_t) __builtin_aarch64_usli_ndi (__a, __b, __c); +} + +/* vsqadd */ + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vsqadd_u8 (uint8x8_t __a, int8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_usqaddv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vsqadd_u16 (uint16x4_t __a, int16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_usqaddv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vsqadd_u32 (uint32x2_t __a, int32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_usqaddv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vsqadd_u64 (uint64x1_t __a, int64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_usqadddi ((int64x1_t) __a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vsqaddq_u8 (uint8x16_t __a, int8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_usqaddv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vsqaddq_u16 (uint16x8_t __a, int16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_usqaddv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vsqaddq_u32 (uint32x4_t __a, int32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_usqaddv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vsqaddq_u64 (uint64x2_t __a, int64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_usqaddv2di ((int64x2_t) __a, + (int64x2_t) __b); +} + +__extension__ static __inline uint8_t __attribute__ ((__always_inline__)) +vsqaddb_u8 (uint8_t __a, int8_t __b) +{ + return (uint8_t) __builtin_aarch64_usqaddqi ((int8_t) __a, __b); +} + +__extension__ static __inline uint16_t __attribute__ ((__always_inline__)) +vsqaddh_u16 (uint16_t __a, int16_t __b) +{ + return (uint16_t) __builtin_aarch64_usqaddhi ((int16_t) __a, __b); +} + +__extension__ static __inline uint32_t __attribute__ ((__always_inline__)) +vsqadds_u32 (uint32_t __a, int32_t __b) +{ + return (uint32_t) __builtin_aarch64_usqaddsi ((int32_t) __a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vsqaddd_u64 (uint64x1_t __a, int64x1_t __b) +{ + return (uint64x1_t) __builtin_aarch64_usqadddi ((int64x1_t) __a, __b); +} + +/* vsqrt */ 
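/* Editor's note -- illustrative only, not part of the imported header.
   The vsqrt* intrinsics defined just below wrap __builtin_aarch64_sqrt*
   and compute an element-wise square root (one FSQRT per vector).  A
   minimal usage sketch; the helper name `sqrt4' is hypothetical, while
   vld1q_f32, vsqrtq_f32 and vst1q_f32 are intrinsics defined in this
   same header:

     #include <arm_neon.h>

     static void
     sqrt4 (const float *src, float *dst)
     {
       float32x4_t v = vld1q_f32 (src);   // load four packed floats
       vst1q_f32 (dst, vsqrtq_f32 (v));   // element-wise sqrt, then store
     }
*/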
+__extension__ static __inline float32x2_t __attribute__ ((__always_inline__)) +vsqrt_f32 (float32x2_t a) +{ + return __builtin_aarch64_sqrtv2sf (a); +} + +__extension__ static __inline float32x4_t __attribute__ ((__always_inline__)) +vsqrtq_f32 (float32x4_t a) +{ + return __builtin_aarch64_sqrtv4sf (a); +} + +__extension__ static __inline float64x2_t __attribute__ ((__always_inline__)) +vsqrtq_f64 (float64x2_t a) +{ + return __builtin_aarch64_sqrtv2df (a); +} + +/* vsra */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vsra_n_s8 (int8x8_t __a, int8x8_t __b, const int __c) +{ + return (int8x8_t) __builtin_aarch64_ssra_nv8qi (__a, __b, __c); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vsra_n_s16 (int16x4_t __a, int16x4_t __b, const int __c) +{ + return (int16x4_t) __builtin_aarch64_ssra_nv4hi (__a, __b, __c); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vsra_n_s32 (int32x2_t __a, int32x2_t __b, const int __c) +{ + return (int32x2_t) __builtin_aarch64_ssra_nv2si (__a, __b, __c); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vsra_n_s64 (int64x1_t __a, int64x1_t __b, const int __c) +{ + return (int64x1_t) __builtin_aarch64_ssra_ndi (__a, __b, __c); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vsra_n_u8 (uint8x8_t __a, uint8x8_t __b, const int __c) +{ + return (uint8x8_t) __builtin_aarch64_usra_nv8qi ((int8x8_t) __a, + (int8x8_t) __b, __c); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vsra_n_u16 (uint16x4_t __a, uint16x4_t __b, const int __c) +{ + return (uint16x4_t) __builtin_aarch64_usra_nv4hi ((int16x4_t) __a, + (int16x4_t) __b, __c); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vsra_n_u32 (uint32x2_t __a, uint32x2_t __b, const int __c) +{ + return (uint32x2_t) __builtin_aarch64_usra_nv2si ((int32x2_t) __a, + (int32x2_t) __b, __c); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vsra_n_u64 (uint64x1_t __a, uint64x1_t __b, const int __c) +{ + return (uint64x1_t) __builtin_aarch64_usra_ndi ((int64x1_t) __a, + (int64x1_t) __b, __c); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vsraq_n_s8 (int8x16_t __a, int8x16_t __b, const int __c) +{ + return (int8x16_t) __builtin_aarch64_ssra_nv16qi (__a, __b, __c); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vsraq_n_s16 (int16x8_t __a, int16x8_t __b, const int __c) +{ + return (int16x8_t) __builtin_aarch64_ssra_nv8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vsraq_n_s32 (int32x4_t __a, int32x4_t __b, const int __c) +{ + return (int32x4_t) __builtin_aarch64_ssra_nv4si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vsraq_n_s64 (int64x2_t __a, int64x2_t __b, const int __c) +{ + return (int64x2_t) __builtin_aarch64_ssra_nv2di (__a, __b, __c); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vsraq_n_u8 (uint8x16_t __a, uint8x16_t __b, const int __c) +{ + return (uint8x16_t) __builtin_aarch64_usra_nv16qi ((int8x16_t) __a, + (int8x16_t) __b, __c); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vsraq_n_u16 (uint16x8_t __a, uint16x8_t __b, const int __c) +{ + return (uint16x8_t) __builtin_aarch64_usra_nv8hi 
((int16x8_t) __a, + (int16x8_t) __b, __c); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vsraq_n_u32 (uint32x4_t __a, uint32x4_t __b, const int __c) +{ + return (uint32x4_t) __builtin_aarch64_usra_nv4si ((int32x4_t) __a, + (int32x4_t) __b, __c); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vsraq_n_u64 (uint64x2_t __a, uint64x2_t __b, const int __c) +{ + return (uint64x2_t) __builtin_aarch64_usra_nv2di ((int64x2_t) __a, + (int64x2_t) __b, __c); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vsrad_n_s64 (int64x1_t __a, int64x1_t __b, const int __c) +{ + return (int64x1_t) __builtin_aarch64_ssra_ndi (__a, __b, __c); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vsrad_n_u64 (uint64x1_t __a, uint64x1_t __b, const int __c) +{ + return (uint64x1_t) __builtin_aarch64_usra_ndi (__a, __b, __c); +} + +/* vsri */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vsri_n_s8 (int8x8_t __a, int8x8_t __b, const int __c) +{ + return (int8x8_t) __builtin_aarch64_ssri_nv8qi (__a, __b, __c); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vsri_n_s16 (int16x4_t __a, int16x4_t __b, const int __c) +{ + return (int16x4_t) __builtin_aarch64_ssri_nv4hi (__a, __b, __c); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vsri_n_s32 (int32x2_t __a, int32x2_t __b, const int __c) +{ + return (int32x2_t) __builtin_aarch64_ssri_nv2si (__a, __b, __c); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vsri_n_s64 (int64x1_t __a, int64x1_t __b, const int __c) +{ + return (int64x1_t) __builtin_aarch64_ssri_ndi (__a, __b, __c); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vsri_n_u8 (uint8x8_t __a, uint8x8_t __b, const int __c) +{ + return (uint8x8_t) __builtin_aarch64_usri_nv8qi ((int8x8_t) __a, + (int8x8_t) __b, __c); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vsri_n_u16 (uint16x4_t __a, uint16x4_t __b, const int __c) +{ + return (uint16x4_t) __builtin_aarch64_usri_nv4hi ((int16x4_t) __a, + (int16x4_t) __b, __c); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vsri_n_u32 (uint32x2_t __a, uint32x2_t __b, const int __c) +{ + return (uint32x2_t) __builtin_aarch64_usri_nv2si ((int32x2_t) __a, + (int32x2_t) __b, __c); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vsri_n_u64 (uint64x1_t __a, uint64x1_t __b, const int __c) +{ + return (uint64x1_t) __builtin_aarch64_usri_ndi ((int64x1_t) __a, + (int64x1_t) __b, __c); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vsriq_n_s8 (int8x16_t __a, int8x16_t __b, const int __c) +{ + return (int8x16_t) __builtin_aarch64_ssri_nv16qi (__a, __b, __c); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vsriq_n_s16 (int16x8_t __a, int16x8_t __b, const int __c) +{ + return (int16x8_t) __builtin_aarch64_ssri_nv8hi (__a, __b, __c); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vsriq_n_s32 (int32x4_t __a, int32x4_t __b, const int __c) +{ + return (int32x4_t) __builtin_aarch64_ssri_nv4si (__a, __b, __c); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vsriq_n_s64 (int64x2_t __a, int64x2_t __b, const int __c) +{ + return (int64x2_t) 
__builtin_aarch64_ssri_nv2di (__a, __b, __c); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vsriq_n_u8 (uint8x16_t __a, uint8x16_t __b, const int __c) +{ + return (uint8x16_t) __builtin_aarch64_usri_nv16qi ((int8x16_t) __a, + (int8x16_t) __b, __c); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vsriq_n_u16 (uint16x8_t __a, uint16x8_t __b, const int __c) +{ + return (uint16x8_t) __builtin_aarch64_usri_nv8hi ((int16x8_t) __a, + (int16x8_t) __b, __c); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vsriq_n_u32 (uint32x4_t __a, uint32x4_t __b, const int __c) +{ + return (uint32x4_t) __builtin_aarch64_usri_nv4si ((int32x4_t) __a, + (int32x4_t) __b, __c); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vsriq_n_u64 (uint64x2_t __a, uint64x2_t __b, const int __c) +{ + return (uint64x2_t) __builtin_aarch64_usri_nv2di ((int64x2_t) __a, + (int64x2_t) __b, __c); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vsrid_n_s64 (int64x1_t __a, int64x1_t __b, const int __c) +{ + return (int64x1_t) __builtin_aarch64_ssri_ndi (__a, __b, __c); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vsrid_n_u64 (uint64x1_t __a, uint64x1_t __b, const int __c) +{ + return (uint64x1_t) __builtin_aarch64_usri_ndi (__a, __b, __c); +} + +/* vst1 */ + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1_f32 (float32_t *a, float32x2_t b) +{ + __builtin_aarch64_st1v2sf ((__builtin_aarch64_simd_sf *) a, b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1_f64 (float64_t *a, float64x1_t b) +{ + *a = b; +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1_p8 (poly8_t *a, poly8x8_t b) +{ + __builtin_aarch64_st1v8qi ((__builtin_aarch64_simd_qi *) a, + (int8x8_t) b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1_p16 (poly16_t *a, poly16x4_t b) +{ + __builtin_aarch64_st1v4hi ((__builtin_aarch64_simd_hi *) a, + (int16x4_t) b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1_s8 (int8_t *a, int8x8_t b) +{ + __builtin_aarch64_st1v8qi ((__builtin_aarch64_simd_qi *) a, b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1_s16 (int16_t *a, int16x4_t b) +{ + __builtin_aarch64_st1v4hi ((__builtin_aarch64_simd_hi *) a, b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1_s32 (int32_t *a, int32x2_t b) +{ + __builtin_aarch64_st1v2si ((__builtin_aarch64_simd_si *) a, b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1_s64 (int64_t *a, int64x1_t b) +{ + *a = b; +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1_u8 (uint8_t *a, uint8x8_t b) +{ + __builtin_aarch64_st1v8qi ((__builtin_aarch64_simd_qi *) a, + (int8x8_t) b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1_u16 (uint16_t *a, uint16x4_t b) +{ + __builtin_aarch64_st1v4hi ((__builtin_aarch64_simd_hi *) a, + (int16x4_t) b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1_u32 (uint32_t *a, uint32x2_t b) +{ + __builtin_aarch64_st1v2si ((__builtin_aarch64_simd_si *) a, + (int32x2_t) b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1_u64 (uint64_t *a, uint64x1_t b) +{ + *a = b; +} + 
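/* Editor's note -- illustrative only, not part of the imported header.
   The vst1_* intrinsics above store a single 64-bit (D-register) vector;
   for the one-element 64-bit types (vst1_s64/u64/f64) this reduces to a
   plain scalar store, which is why those bodies are simply `*a = b'.  The
   vst1q_* forms that follow do the same for 128-bit (Q-register) vectors.
   A minimal round-trip sketch; `copy8' is a hypothetical helper and
   vld1_u8 is the matching load intrinsic from this header:

     #include <arm_neon.h>

     static void
     copy8 (const uint8_t *src, uint8_t *dst)
     {
       uint8x8_t v = vld1_u8 (src);   // load 8 bytes into a D register
       vst1_u8 (dst, v);              // store all 8 lanes back
     }
*/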
+/* vst1q */ + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1q_f32 (float32_t *a, float32x4_t b) +{ + __builtin_aarch64_st1v4sf ((__builtin_aarch64_simd_sf *) a, b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1q_f64 (float64_t *a, float64x2_t b) +{ + __builtin_aarch64_st1v2df ((__builtin_aarch64_simd_df *) a, b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1q_p8 (poly8_t *a, poly8x16_t b) +{ + __builtin_aarch64_st1v16qi ((__builtin_aarch64_simd_qi *) a, + (int8x16_t) b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1q_p16 (poly16_t *a, poly16x8_t b) +{ + __builtin_aarch64_st1v8hi ((__builtin_aarch64_simd_hi *) a, + (int16x8_t) b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1q_s8 (int8_t *a, int8x16_t b) +{ + __builtin_aarch64_st1v16qi ((__builtin_aarch64_simd_qi *) a, b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1q_s16 (int16_t *a, int16x8_t b) +{ + __builtin_aarch64_st1v8hi ((__builtin_aarch64_simd_hi *) a, b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1q_s32 (int32_t *a, int32x4_t b) +{ + __builtin_aarch64_st1v4si ((__builtin_aarch64_simd_si *) a, b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1q_s64 (int64_t *a, int64x2_t b) +{ + __builtin_aarch64_st1v2di ((__builtin_aarch64_simd_di *) a, b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1q_u8 (uint8_t *a, uint8x16_t b) +{ + __builtin_aarch64_st1v16qi ((__builtin_aarch64_simd_qi *) a, + (int8x16_t) b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1q_u16 (uint16_t *a, uint16x8_t b) +{ + __builtin_aarch64_st1v8hi ((__builtin_aarch64_simd_hi *) a, + (int16x8_t) b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1q_u32 (uint32_t *a, uint32x4_t b) +{ + __builtin_aarch64_st1v4si ((__builtin_aarch64_simd_si *) a, + (int32x4_t) b); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst1q_u64 (uint64_t *a, uint64x2_t b) +{ + __builtin_aarch64_st1v2di ((__builtin_aarch64_simd_di *) a, + (int64x2_t) b); +} + +/* vstn */ + +__extension__ static __inline void +vst2_s64 (int64_t * __a, int64x1x2_t val) +{ + __builtin_aarch64_simd_oi __o; + int64x2x2_t temp; + temp.val[0] = vcombine_s64 (val.val[0], vcreate_s64 (__AARCH64_INT64_C (0))); + temp.val[1] = vcombine_s64 (val.val[1], vcreate_s64 (__AARCH64_INT64_C (0))); + __o = __builtin_aarch64_set_qregoiv2di (__o, (int64x2_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregoiv2di (__o, (int64x2_t) temp.val[1], 1); + __builtin_aarch64_st2di ((__builtin_aarch64_simd_di *) __a, __o); +} + +__extension__ static __inline void +vst2_u64 (uint64_t * __a, uint64x1x2_t val) +{ + __builtin_aarch64_simd_oi __o; + uint64x2x2_t temp; + temp.val[0] = vcombine_u64 (val.val[0], vcreate_u64 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_u64 (val.val[1], vcreate_u64 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregoiv2di (__o, (int64x2_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregoiv2di (__o, (int64x2_t) temp.val[1], 1); + __builtin_aarch64_st2di ((__builtin_aarch64_simd_di *) __a, __o); +} + +__extension__ static __inline void +vst2_f64 (float64_t * __a, float64x1x2_t val) +{ + __builtin_aarch64_simd_oi __o; + float64x2x2_t temp; + temp.val[0] = vcombine_f64 (val.val[0], 
vcreate_f64 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_f64 (val.val[1], vcreate_f64 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregoiv2df (__o, (float64x2_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregoiv2df (__o, (float64x2_t) temp.val[1], 1); + __builtin_aarch64_st2df ((__builtin_aarch64_simd_df *) __a, __o); +} + +__extension__ static __inline void +vst2_s8 (int8_t * __a, int8x8x2_t val) +{ + __builtin_aarch64_simd_oi __o; + int8x16x2_t temp; + temp.val[0] = vcombine_s8 (val.val[0], vcreate_s8 (__AARCH64_INT64_C (0))); + temp.val[1] = vcombine_s8 (val.val[1], vcreate_s8 (__AARCH64_INT64_C (0))); + __o = __builtin_aarch64_set_qregoiv16qi (__o, (int8x16_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregoiv16qi (__o, (int8x16_t) temp.val[1], 1); + __builtin_aarch64_st2v8qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2_p8 (poly8_t * __a, poly8x8x2_t val) +{ + __builtin_aarch64_simd_oi __o; + poly8x16x2_t temp; + temp.val[0] = vcombine_p8 (val.val[0], vcreate_p8 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_p8 (val.val[1], vcreate_p8 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregoiv16qi (__o, (int8x16_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregoiv16qi (__o, (int8x16_t) temp.val[1], 1); + __builtin_aarch64_st2v8qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2_s16 (int16_t * __a, int16x4x2_t val) +{ + __builtin_aarch64_simd_oi __o; + int16x8x2_t temp; + temp.val[0] = vcombine_s16 (val.val[0], vcreate_s16 (__AARCH64_INT64_C (0))); + temp.val[1] = vcombine_s16 (val.val[1], vcreate_s16 (__AARCH64_INT64_C (0))); + __o = __builtin_aarch64_set_qregoiv8hi (__o, (int16x8_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregoiv8hi (__o, (int16x8_t) temp.val[1], 1); + __builtin_aarch64_st2v4hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2_p16 (poly16_t * __a, poly16x4x2_t val) +{ + __builtin_aarch64_simd_oi __o; + poly16x8x2_t temp; + temp.val[0] = vcombine_p16 (val.val[0], vcreate_p16 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_p16 (val.val[1], vcreate_p16 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregoiv8hi (__o, (int16x8_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregoiv8hi (__o, (int16x8_t) temp.val[1], 1); + __builtin_aarch64_st2v4hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2_s32 (int32_t * __a, int32x2x2_t val) +{ + __builtin_aarch64_simd_oi __o; + int32x4x2_t temp; + temp.val[0] = vcombine_s32 (val.val[0], vcreate_s32 (__AARCH64_INT64_C (0))); + temp.val[1] = vcombine_s32 (val.val[1], vcreate_s32 (__AARCH64_INT64_C (0))); + __o = __builtin_aarch64_set_qregoiv4si (__o, (int32x4_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregoiv4si (__o, (int32x4_t) temp.val[1], 1); + __builtin_aarch64_st2v2si ((__builtin_aarch64_simd_si *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2_u8 (uint8_t * __a, uint8x8x2_t val) +{ + __builtin_aarch64_simd_oi __o; + uint8x16x2_t temp; + temp.val[0] = vcombine_u8 (val.val[0], vcreate_u8 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_u8 (val.val[1], vcreate_u8 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregoiv16qi (__o, (int8x16_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregoiv16qi 
(__o, (int8x16_t) temp.val[1], 1); + __builtin_aarch64_st2v8qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2_u16 (uint16_t * __a, uint16x4x2_t val) +{ + __builtin_aarch64_simd_oi __o; + uint16x8x2_t temp; + temp.val[0] = vcombine_u16 (val.val[0], vcreate_u16 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_u16 (val.val[1], vcreate_u16 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregoiv8hi (__o, (int16x8_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregoiv8hi (__o, (int16x8_t) temp.val[1], 1); + __builtin_aarch64_st2v4hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2_u32 (uint32_t * __a, uint32x2x2_t val) +{ + __builtin_aarch64_simd_oi __o; + uint32x4x2_t temp; + temp.val[0] = vcombine_u32 (val.val[0], vcreate_u32 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_u32 (val.val[1], vcreate_u32 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregoiv4si (__o, (int32x4_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregoiv4si (__o, (int32x4_t) temp.val[1], 1); + __builtin_aarch64_st2v2si ((__builtin_aarch64_simd_si *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2_f32 (float32_t * __a, float32x2x2_t val) +{ + __builtin_aarch64_simd_oi __o; + float32x4x2_t temp; + temp.val[0] = vcombine_f32 (val.val[0], vcreate_f32 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_f32 (val.val[1], vcreate_f32 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregoiv4sf (__o, (float32x4_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregoiv4sf (__o, (float32x4_t) temp.val[1], 1); + __builtin_aarch64_st2v2sf ((__builtin_aarch64_simd_sf *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2q_s8 (int8_t * __a, int8x16x2_t val) +{ + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_set_qregoiv16qi (__o, (int8x16_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregoiv16qi (__o, (int8x16_t) val.val[1], 1); + __builtin_aarch64_st2v16qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2q_p8 (poly8_t * __a, poly8x16x2_t val) +{ + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_set_qregoiv16qi (__o, (int8x16_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregoiv16qi (__o, (int8x16_t) val.val[1], 1); + __builtin_aarch64_st2v16qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2q_s16 (int16_t * __a, int16x8x2_t val) +{ + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_set_qregoiv8hi (__o, (int16x8_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregoiv8hi (__o, (int16x8_t) val.val[1], 1); + __builtin_aarch64_st2v8hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2q_p16 (poly16_t * __a, poly16x8x2_t val) +{ + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_set_qregoiv8hi (__o, (int16x8_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregoiv8hi (__o, (int16x8_t) val.val[1], 1); + __builtin_aarch64_st2v8hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2q_s32 (int32_t * __a, int32x4x2_t val) +{ + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_set_qregoiv4si (__o, (int32x4_t) val.val[0], 0); + __o = 
__builtin_aarch64_set_qregoiv4si (__o, (int32x4_t) val.val[1], 1); + __builtin_aarch64_st2v4si ((__builtin_aarch64_simd_si *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2q_s64 (int64_t * __a, int64x2x2_t val) +{ + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_set_qregoiv2di (__o, (int64x2_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregoiv2di (__o, (int64x2_t) val.val[1], 1); + __builtin_aarch64_st2v2di ((__builtin_aarch64_simd_di *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2q_u8 (uint8_t * __a, uint8x16x2_t val) +{ + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_set_qregoiv16qi (__o, (int8x16_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregoiv16qi (__o, (int8x16_t) val.val[1], 1); + __builtin_aarch64_st2v16qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2q_u16 (uint16_t * __a, uint16x8x2_t val) +{ + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_set_qregoiv8hi (__o, (int16x8_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregoiv8hi (__o, (int16x8_t) val.val[1], 1); + __builtin_aarch64_st2v8hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2q_u32 (uint32_t * __a, uint32x4x2_t val) +{ + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_set_qregoiv4si (__o, (int32x4_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregoiv4si (__o, (int32x4_t) val.val[1], 1); + __builtin_aarch64_st2v4si ((__builtin_aarch64_simd_si *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2q_u64 (uint64_t * __a, uint64x2x2_t val) +{ + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_set_qregoiv2di (__o, (int64x2_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregoiv2di (__o, (int64x2_t) val.val[1], 1); + __builtin_aarch64_st2v2di ((__builtin_aarch64_simd_di *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2q_f32 (float32_t * __a, float32x4x2_t val) +{ + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_set_qregoiv4sf (__o, (float32x4_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregoiv4sf (__o, (float32x4_t) val.val[1], 1); + __builtin_aarch64_st2v4sf ((__builtin_aarch64_simd_sf *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst2q_f64 (float64_t * __a, float64x2x2_t val) +{ + __builtin_aarch64_simd_oi __o; + __o = __builtin_aarch64_set_qregoiv2df (__o, (float64x2_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregoiv2df (__o, (float64x2_t) val.val[1], 1); + __builtin_aarch64_st2v2df ((__builtin_aarch64_simd_df *) __a, __o); +} + +__extension__ static __inline void +vst3_s64 (int64_t * __a, int64x1x3_t val) +{ + __builtin_aarch64_simd_ci __o; + int64x2x3_t temp; + temp.val[0] = vcombine_s64 (val.val[0], vcreate_s64 (__AARCH64_INT64_C (0))); + temp.val[1] = vcombine_s64 (val.val[1], vcreate_s64 (__AARCH64_INT64_C (0))); + temp.val[2] = vcombine_s64 (val.val[2], vcreate_s64 (__AARCH64_INT64_C (0))); + __o = __builtin_aarch64_set_qregciv2di (__o, (int64x2_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregciv2di (__o, (int64x2_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregciv2di (__o, (int64x2_t) temp.val[2], 2); + __builtin_aarch64_st3di ((__builtin_aarch64_simd_di *) __a, __o); +} + +__extension__ static __inline void +vst3_u64 (uint64_t * __a, 
uint64x1x3_t val) +{ + __builtin_aarch64_simd_ci __o; + uint64x2x3_t temp; + temp.val[0] = vcombine_u64 (val.val[0], vcreate_u64 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_u64 (val.val[1], vcreate_u64 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_u64 (val.val[2], vcreate_u64 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregciv2di (__o, (int64x2_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregciv2di (__o, (int64x2_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregciv2di (__o, (int64x2_t) temp.val[2], 2); + __builtin_aarch64_st3di ((__builtin_aarch64_simd_di *) __a, __o); +} + +__extension__ static __inline void +vst3_f64 (float64_t * __a, float64x1x3_t val) +{ + __builtin_aarch64_simd_ci __o; + float64x2x3_t temp; + temp.val[0] = vcombine_f64 (val.val[0], vcreate_f64 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_f64 (val.val[1], vcreate_f64 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_f64 (val.val[2], vcreate_f64 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregciv2df (__o, (float64x2_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregciv2df (__o, (float64x2_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregciv2df (__o, (float64x2_t) temp.val[2], 2); + __builtin_aarch64_st3df ((__builtin_aarch64_simd_df *) __a, __o); +} + +__extension__ static __inline void +vst3_s8 (int8_t * __a, int8x8x3_t val) +{ + __builtin_aarch64_simd_ci __o; + int8x16x3_t temp; + temp.val[0] = vcombine_s8 (val.val[0], vcreate_s8 (__AARCH64_INT64_C (0))); + temp.val[1] = vcombine_s8 (val.val[1], vcreate_s8 (__AARCH64_INT64_C (0))); + temp.val[2] = vcombine_s8 (val.val[2], vcreate_s8 (__AARCH64_INT64_C (0))); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) temp.val[2], 2); + __builtin_aarch64_st3v8qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3_p8 (poly8_t * __a, poly8x8x3_t val) +{ + __builtin_aarch64_simd_ci __o; + poly8x16x3_t temp; + temp.val[0] = vcombine_p8 (val.val[0], vcreate_p8 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_p8 (val.val[1], vcreate_p8 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_p8 (val.val[2], vcreate_p8 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) temp.val[2], 2); + __builtin_aarch64_st3v8qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3_s16 (int16_t * __a, int16x4x3_t val) +{ + __builtin_aarch64_simd_ci __o; + int16x8x3_t temp; + temp.val[0] = vcombine_s16 (val.val[0], vcreate_s16 (__AARCH64_INT64_C (0))); + temp.val[1] = vcombine_s16 (val.val[1], vcreate_s16 (__AARCH64_INT64_C (0))); + temp.val[2] = vcombine_s16 (val.val[2], vcreate_s16 (__AARCH64_INT64_C (0))); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) temp.val[2], 2); + __builtin_aarch64_st3v4hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3_p16 (poly16_t * __a, poly16x4x3_t val) +{ + 
__builtin_aarch64_simd_ci __o; + poly16x8x3_t temp; + temp.val[0] = vcombine_p16 (val.val[0], vcreate_p16 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_p16 (val.val[1], vcreate_p16 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_p16 (val.val[2], vcreate_p16 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) temp.val[2], 2); + __builtin_aarch64_st3v4hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3_s32 (int32_t * __a, int32x2x3_t val) +{ + __builtin_aarch64_simd_ci __o; + int32x4x3_t temp; + temp.val[0] = vcombine_s32 (val.val[0], vcreate_s32 (__AARCH64_INT64_C (0))); + temp.val[1] = vcombine_s32 (val.val[1], vcreate_s32 (__AARCH64_INT64_C (0))); + temp.val[2] = vcombine_s32 (val.val[2], vcreate_s32 (__AARCH64_INT64_C (0))); + __o = __builtin_aarch64_set_qregciv4si (__o, (int32x4_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregciv4si (__o, (int32x4_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregciv4si (__o, (int32x4_t) temp.val[2], 2); + __builtin_aarch64_st3v2si ((__builtin_aarch64_simd_si *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3_u8 (uint8_t * __a, uint8x8x3_t val) +{ + __builtin_aarch64_simd_ci __o; + uint8x16x3_t temp; + temp.val[0] = vcombine_u8 (val.val[0], vcreate_u8 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_u8 (val.val[1], vcreate_u8 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_u8 (val.val[2], vcreate_u8 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) temp.val[2], 2); + __builtin_aarch64_st3v8qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3_u16 (uint16_t * __a, uint16x4x3_t val) +{ + __builtin_aarch64_simd_ci __o; + uint16x8x3_t temp; + temp.val[0] = vcombine_u16 (val.val[0], vcreate_u16 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_u16 (val.val[1], vcreate_u16 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_u16 (val.val[2], vcreate_u16 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) temp.val[2], 2); + __builtin_aarch64_st3v4hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3_u32 (uint32_t * __a, uint32x2x3_t val) +{ + __builtin_aarch64_simd_ci __o; + uint32x4x3_t temp; + temp.val[0] = vcombine_u32 (val.val[0], vcreate_u32 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_u32 (val.val[1], vcreate_u32 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_u32 (val.val[2], vcreate_u32 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregciv4si (__o, (int32x4_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregciv4si (__o, (int32x4_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregciv4si (__o, (int32x4_t) temp.val[2], 2); + __builtin_aarch64_st3v2si ((__builtin_aarch64_simd_si *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3_f32 
(float32_t * __a, float32x2x3_t val) +{ + __builtin_aarch64_simd_ci __o; + float32x4x3_t temp; + temp.val[0] = vcombine_f32 (val.val[0], vcreate_f32 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_f32 (val.val[1], vcreate_f32 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_f32 (val.val[2], vcreate_f32 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregciv4sf (__o, (float32x4_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregciv4sf (__o, (float32x4_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregciv4sf (__o, (float32x4_t) temp.val[2], 2); + __builtin_aarch64_st3v2sf ((__builtin_aarch64_simd_sf *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3q_s8 (int8_t * __a, int8x16x3_t val) +{ + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) val.val[2], 2); + __builtin_aarch64_st3v16qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3q_p8 (poly8_t * __a, poly8x16x3_t val) +{ + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) val.val[2], 2); + __builtin_aarch64_st3v16qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3q_s16 (int16_t * __a, int16x8x3_t val) +{ + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) val.val[2], 2); + __builtin_aarch64_st3v8hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3q_p16 (poly16_t * __a, poly16x8x3_t val) +{ + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) val.val[2], 2); + __builtin_aarch64_st3v8hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3q_s32 (int32_t * __a, int32x4x3_t val) +{ + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_set_qregciv4si (__o, (int32x4_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregciv4si (__o, (int32x4_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregciv4si (__o, (int32x4_t) val.val[2], 2); + __builtin_aarch64_st3v4si ((__builtin_aarch64_simd_si *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3q_s64 (int64_t * __a, int64x2x3_t val) +{ + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_set_qregciv2di (__o, (int64x2_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregciv2di (__o, (int64x2_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregciv2di (__o, (int64x2_t) val.val[2], 2); + __builtin_aarch64_st3v2di ((__builtin_aarch64_simd_di *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3q_u8 (uint8_t * __a, uint8x16x3_t val) +{ + __builtin_aarch64_simd_ci __o; + __o = 
__builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregciv16qi (__o, (int8x16_t) val.val[2], 2); + __builtin_aarch64_st3v16qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3q_u16 (uint16_t * __a, uint16x8x3_t val) +{ + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregciv8hi (__o, (int16x8_t) val.val[2], 2); + __builtin_aarch64_st3v8hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3q_u32 (uint32_t * __a, uint32x4x3_t val) +{ + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_set_qregciv4si (__o, (int32x4_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregciv4si (__o, (int32x4_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregciv4si (__o, (int32x4_t) val.val[2], 2); + __builtin_aarch64_st3v4si ((__builtin_aarch64_simd_si *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3q_u64 (uint64_t * __a, uint64x2x3_t val) +{ + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_set_qregciv2di (__o, (int64x2_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregciv2di (__o, (int64x2_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregciv2di (__o, (int64x2_t) val.val[2], 2); + __builtin_aarch64_st3v2di ((__builtin_aarch64_simd_di *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3q_f32 (float32_t * __a, float32x4x3_t val) +{ + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_set_qregciv4sf (__o, (float32x4_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregciv4sf (__o, (float32x4_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregciv4sf (__o, (float32x4_t) val.val[2], 2); + __builtin_aarch64_st3v4sf ((__builtin_aarch64_simd_sf *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst3q_f64 (float64_t * __a, float64x2x3_t val) +{ + __builtin_aarch64_simd_ci __o; + __o = __builtin_aarch64_set_qregciv2df (__o, (float64x2_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregciv2df (__o, (float64x2_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregciv2df (__o, (float64x2_t) val.val[2], 2); + __builtin_aarch64_st3v2df ((__builtin_aarch64_simd_df *) __a, __o); +} + +__extension__ static __inline void +vst4_s64 (int64_t * __a, int64x1x4_t val) +{ + __builtin_aarch64_simd_xi __o; + int64x2x4_t temp; + temp.val[0] = vcombine_s64 (val.val[0], vcreate_s64 (__AARCH64_INT64_C (0))); + temp.val[1] = vcombine_s64 (val.val[1], vcreate_s64 (__AARCH64_INT64_C (0))); + temp.val[2] = vcombine_s64 (val.val[2], vcreate_s64 (__AARCH64_INT64_C (0))); + temp.val[3] = vcombine_s64 (val.val[3], vcreate_s64 (__AARCH64_INT64_C (0))); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) temp.val[2], 2); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) temp.val[3], 3); + __builtin_aarch64_st4di ((__builtin_aarch64_simd_di *) __a, __o); +} + +__extension__ static __inline void +vst4_u64 (uint64_t * __a, uint64x1x4_t val) +{ + __builtin_aarch64_simd_xi __o; + uint64x2x4_t temp; + 
temp.val[0] = vcombine_u64 (val.val[0], vcreate_u64 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_u64 (val.val[1], vcreate_u64 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_u64 (val.val[2], vcreate_u64 (__AARCH64_UINT64_C (0))); + temp.val[3] = vcombine_u64 (val.val[3], vcreate_u64 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) temp.val[2], 2); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) temp.val[3], 3); + __builtin_aarch64_st4di ((__builtin_aarch64_simd_di *) __a, __o); +} + +__extension__ static __inline void +vst4_f64 (float64_t * __a, float64x1x4_t val) +{ + __builtin_aarch64_simd_xi __o; + float64x2x4_t temp; + temp.val[0] = vcombine_f64 (val.val[0], vcreate_f64 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_f64 (val.val[1], vcreate_f64 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_f64 (val.val[2], vcreate_f64 (__AARCH64_UINT64_C (0))); + temp.val[3] = vcombine_f64 (val.val[3], vcreate_f64 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregxiv2df (__o, (float64x2_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregxiv2df (__o, (float64x2_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregxiv2df (__o, (float64x2_t) temp.val[2], 2); + __o = __builtin_aarch64_set_qregxiv2df (__o, (float64x2_t) temp.val[3], 3); + __builtin_aarch64_st4df ((__builtin_aarch64_simd_df *) __a, __o); +} + +__extension__ static __inline void +vst4_s8 (int8_t * __a, int8x8x4_t val) +{ + __builtin_aarch64_simd_xi __o; + int8x16x4_t temp; + temp.val[0] = vcombine_s8 (val.val[0], vcreate_s8 (__AARCH64_INT64_C (0))); + temp.val[1] = vcombine_s8 (val.val[1], vcreate_s8 (__AARCH64_INT64_C (0))); + temp.val[2] = vcombine_s8 (val.val[2], vcreate_s8 (__AARCH64_INT64_C (0))); + temp.val[3] = vcombine_s8 (val.val[3], vcreate_s8 (__AARCH64_INT64_C (0))); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) temp.val[2], 2); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) temp.val[3], 3); + __builtin_aarch64_st4v8qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4_p8 (poly8_t * __a, poly8x8x4_t val) +{ + __builtin_aarch64_simd_xi __o; + poly8x16x4_t temp; + temp.val[0] = vcombine_p8 (val.val[0], vcreate_p8 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_p8 (val.val[1], vcreate_p8 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_p8 (val.val[2], vcreate_p8 (__AARCH64_UINT64_C (0))); + temp.val[3] = vcombine_p8 (val.val[3], vcreate_p8 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) temp.val[2], 2); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) temp.val[3], 3); + __builtin_aarch64_st4v8qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4_s16 (int16_t * __a, int16x4x4_t val) +{ + __builtin_aarch64_simd_xi __o; + int16x8x4_t temp; + temp.val[0] = vcombine_s16 (val.val[0], vcreate_s16 (__AARCH64_INT64_C (0))); + temp.val[1] = vcombine_s16 (val.val[1], vcreate_s16 
(__AARCH64_INT64_C (0))); + temp.val[2] = vcombine_s16 (val.val[2], vcreate_s16 (__AARCH64_INT64_C (0))); + temp.val[3] = vcombine_s16 (val.val[3], vcreate_s16 (__AARCH64_INT64_C (0))); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) temp.val[2], 2); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) temp.val[3], 3); + __builtin_aarch64_st4v4hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4_p16 (poly16_t * __a, poly16x4x4_t val) +{ + __builtin_aarch64_simd_xi __o; + poly16x8x4_t temp; + temp.val[0] = vcombine_p16 (val.val[0], vcreate_p16 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_p16 (val.val[1], vcreate_p16 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_p16 (val.val[2], vcreate_p16 (__AARCH64_UINT64_C (0))); + temp.val[3] = vcombine_p16 (val.val[3], vcreate_p16 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) temp.val[2], 2); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) temp.val[3], 3); + __builtin_aarch64_st4v4hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4_s32 (int32_t * __a, int32x2x4_t val) +{ + __builtin_aarch64_simd_xi __o; + int32x4x4_t temp; + temp.val[0] = vcombine_s32 (val.val[0], vcreate_s32 (__AARCH64_INT64_C (0))); + temp.val[1] = vcombine_s32 (val.val[1], vcreate_s32 (__AARCH64_INT64_C (0))); + temp.val[2] = vcombine_s32 (val.val[2], vcreate_s32 (__AARCH64_INT64_C (0))); + temp.val[3] = vcombine_s32 (val.val[3], vcreate_s32 (__AARCH64_INT64_C (0))); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) temp.val[2], 2); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) temp.val[3], 3); + __builtin_aarch64_st4v2si ((__builtin_aarch64_simd_si *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4_u8 (uint8_t * __a, uint8x8x4_t val) +{ + __builtin_aarch64_simd_xi __o; + uint8x16x4_t temp; + temp.val[0] = vcombine_u8 (val.val[0], vcreate_u8 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_u8 (val.val[1], vcreate_u8 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_u8 (val.val[2], vcreate_u8 (__AARCH64_UINT64_C (0))); + temp.val[3] = vcombine_u8 (val.val[3], vcreate_u8 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) temp.val[2], 2); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) temp.val[3], 3); + __builtin_aarch64_st4v8qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4_u16 (uint16_t * __a, uint16x4x4_t val) +{ + __builtin_aarch64_simd_xi __o; + uint16x8x4_t temp; + temp.val[0] = vcombine_u16 (val.val[0], vcreate_u16 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_u16 (val.val[1], vcreate_u16 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_u16 
(val.val[2], vcreate_u16 (__AARCH64_UINT64_C (0))); + temp.val[3] = vcombine_u16 (val.val[3], vcreate_u16 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) temp.val[2], 2); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) temp.val[3], 3); + __builtin_aarch64_st4v4hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4_u32 (uint32_t * __a, uint32x2x4_t val) +{ + __builtin_aarch64_simd_xi __o; + uint32x4x4_t temp; + temp.val[0] = vcombine_u32 (val.val[0], vcreate_u32 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_u32 (val.val[1], vcreate_u32 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_u32 (val.val[2], vcreate_u32 (__AARCH64_UINT64_C (0))); + temp.val[3] = vcombine_u32 (val.val[3], vcreate_u32 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) temp.val[2], 2); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) temp.val[3], 3); + __builtin_aarch64_st4v2si ((__builtin_aarch64_simd_si *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4_f32 (float32_t * __a, float32x2x4_t val) +{ + __builtin_aarch64_simd_xi __o; + float32x4x4_t temp; + temp.val[0] = vcombine_f32 (val.val[0], vcreate_f32 (__AARCH64_UINT64_C (0))); + temp.val[1] = vcombine_f32 (val.val[1], vcreate_f32 (__AARCH64_UINT64_C (0))); + temp.val[2] = vcombine_f32 (val.val[2], vcreate_f32 (__AARCH64_UINT64_C (0))); + temp.val[3] = vcombine_f32 (val.val[3], vcreate_f32 (__AARCH64_UINT64_C (0))); + __o = __builtin_aarch64_set_qregxiv4sf (__o, (float32x4_t) temp.val[0], 0); + __o = __builtin_aarch64_set_qregxiv4sf (__o, (float32x4_t) temp.val[1], 1); + __o = __builtin_aarch64_set_qregxiv4sf (__o, (float32x4_t) temp.val[2], 2); + __o = __builtin_aarch64_set_qregxiv4sf (__o, (float32x4_t) temp.val[3], 3); + __builtin_aarch64_st4v2sf ((__builtin_aarch64_simd_sf *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4q_s8 (int8_t * __a, int8x16x4_t val) +{ + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) val.val[2], 2); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) val.val[3], 3); + __builtin_aarch64_st4v16qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4q_p8 (poly8_t * __a, poly8x16x4_t val) +{ + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) val.val[2], 2); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) val.val[3], 3); + __builtin_aarch64_st4v16qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4q_s16 (int16_t * __a, int16x8x4_t val) +{ + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) 
val.val[0], 0); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) val.val[2], 2); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) val.val[3], 3); + __builtin_aarch64_st4v8hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4q_p16 (poly16_t * __a, poly16x8x4_t val) +{ + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) val.val[2], 2); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) val.val[3], 3); + __builtin_aarch64_st4v8hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4q_s32 (int32_t * __a, int32x4x4_t val) +{ + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) val.val[2], 2); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) val.val[3], 3); + __builtin_aarch64_st4v4si ((__builtin_aarch64_simd_si *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4q_s64 (int64_t * __a, int64x2x4_t val) +{ + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) val.val[2], 2); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) val.val[3], 3); + __builtin_aarch64_st4v2di ((__builtin_aarch64_simd_di *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4q_u8 (uint8_t * __a, uint8x16x4_t val) +{ + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) val.val[2], 2); + __o = __builtin_aarch64_set_qregxiv16qi (__o, (int8x16_t) val.val[3], 3); + __builtin_aarch64_st4v16qi ((__builtin_aarch64_simd_qi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4q_u16 (uint16_t * __a, uint16x8x4_t val) +{ + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) val.val[2], 2); + __o = __builtin_aarch64_set_qregxiv8hi (__o, (int16x8_t) val.val[3], 3); + __builtin_aarch64_st4v8hi ((__builtin_aarch64_simd_hi *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4q_u32 (uint32_t * __a, uint32x4x4_t val) +{ + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) val.val[2], 2); + __o = __builtin_aarch64_set_qregxiv4si (__o, (int32x4_t) val.val[3], 3); + __builtin_aarch64_st4v4si ((__builtin_aarch64_simd_si *) __a, __o); +} + +__extension__ static __inline void __attribute__ 
((__always_inline__)) +vst4q_u64 (uint64_t * __a, uint64x2x4_t val) +{ + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) val.val[2], 2); + __o = __builtin_aarch64_set_qregxiv2di (__o, (int64x2_t) val.val[3], 3); + __builtin_aarch64_st4v2di ((__builtin_aarch64_simd_di *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4q_f32 (float32_t * __a, float32x4x4_t val) +{ + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_set_qregxiv4sf (__o, (float32x4_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregxiv4sf (__o, (float32x4_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregxiv4sf (__o, (float32x4_t) val.val[2], 2); + __o = __builtin_aarch64_set_qregxiv4sf (__o, (float32x4_t) val.val[3], 3); + __builtin_aarch64_st4v4sf ((__builtin_aarch64_simd_sf *) __a, __o); +} + +__extension__ static __inline void __attribute__ ((__always_inline__)) +vst4q_f64 (float64_t * __a, float64x2x4_t val) +{ + __builtin_aarch64_simd_xi __o; + __o = __builtin_aarch64_set_qregxiv2df (__o, (float64x2_t) val.val[0], 0); + __o = __builtin_aarch64_set_qregxiv2df (__o, (float64x2_t) val.val[1], 1); + __o = __builtin_aarch64_set_qregxiv2df (__o, (float64x2_t) val.val[2], 2); + __o = __builtin_aarch64_set_qregxiv2df (__o, (float64x2_t) val.val[3], 3); + __builtin_aarch64_st4v2df ((__builtin_aarch64_simd_df *) __a, __o); +} + +/* vsub */ + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vsubd_s64 (int64x1_t __a, int64x1_t __b) +{ + return __a - __b; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vsubd_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return __a - __b; +} + +/* vtbx1 */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vtbx1_s8 (int8x8_t __r, int8x8_t __tab, int8x8_t __idx) +{ + uint8x8_t __mask = vclt_u8 (vreinterpret_u8_s8 (__idx), + vmov_n_u8 (8)); + int8x8_t __tbl = vtbl1_s8 (__tab, __idx); + + return vbsl_s8 (__mask, __tbl, __r); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vtbx1_u8 (uint8x8_t __r, uint8x8_t __tab, uint8x8_t __idx) +{ + uint8x8_t __mask = vclt_u8 (__idx, vmov_n_u8 (8)); + uint8x8_t __tbl = vtbl1_u8 (__tab, __idx); + + return vbsl_u8 (__mask, __tbl, __r); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vtbx1_p8 (poly8x8_t __r, poly8x8_t __tab, uint8x8_t __idx) +{ + uint8x8_t __mask = vclt_u8 (__idx, vmov_n_u8 (8)); + poly8x8_t __tbl = vtbl1_p8 (__tab, __idx); + + return vbsl_p8 (__mask, __tbl, __r); +} + +/* vtbx3 */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vtbx3_s8 (int8x8_t __r, int8x8x3_t __tab, int8x8_t __idx) +{ + uint8x8_t __mask = vclt_u8 (vreinterpret_u8_s8 (__idx), + vmov_n_u8 (24)); + int8x8_t __tbl = vtbl3_s8 (__tab, __idx); + + return vbsl_s8 (__mask, __tbl, __r); +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vtbx3_u8 (uint8x8_t __r, uint8x8x3_t __tab, uint8x8_t __idx) +{ + uint8x8_t __mask = vclt_u8 (__idx, vmov_n_u8 (24)); + uint8x8_t __tbl = vtbl3_u8 (__tab, __idx); + + return vbsl_u8 (__mask, __tbl, __r); +} + +__extension__ static __inline poly8x8_t __attribute__ ((__always_inline__)) +vtbx3_p8 (poly8x8_t __r, poly8x8x3_t __tab, uint8x8_t __idx) +{ + uint8x8_t __mask = vclt_u8 (__idx, vmov_n_u8 
(24)); + poly8x8_t __tbl = vtbl3_p8 (__tab, __idx); + + return vbsl_p8 (__mask, __tbl, __r); +} + +/* vtrn */ + +__extension__ static __inline float32x2x2_t __attribute__ ((__always_inline__)) +vtrn_f32 (float32x2_t a, float32x2_t b) +{ + return (float32x2x2_t) {vtrn1_f32 (a, b), vtrn2_f32 (a, b)}; +} + +__extension__ static __inline poly8x8x2_t __attribute__ ((__always_inline__)) +vtrn_p8 (poly8x8_t a, poly8x8_t b) +{ + return (poly8x8x2_t) {vtrn1_p8 (a, b), vtrn2_p8 (a, b)}; +} + +__extension__ static __inline poly16x4x2_t __attribute__ ((__always_inline__)) +vtrn_p16 (poly16x4_t a, poly16x4_t b) +{ + return (poly16x4x2_t) {vtrn1_p16 (a, b), vtrn2_p16 (a, b)}; +} + +__extension__ static __inline int8x8x2_t __attribute__ ((__always_inline__)) +vtrn_s8 (int8x8_t a, int8x8_t b) +{ + return (int8x8x2_t) {vtrn1_s8 (a, b), vtrn2_s8 (a, b)}; +} + +__extension__ static __inline int16x4x2_t __attribute__ ((__always_inline__)) +vtrn_s16 (int16x4_t a, int16x4_t b) +{ + return (int16x4x2_t) {vtrn1_s16 (a, b), vtrn2_s16 (a, b)}; +} + +__extension__ static __inline int32x2x2_t __attribute__ ((__always_inline__)) +vtrn_s32 (int32x2_t a, int32x2_t b) +{ + return (int32x2x2_t) {vtrn1_s32 (a, b), vtrn2_s32 (a, b)}; +} + +__extension__ static __inline uint8x8x2_t __attribute__ ((__always_inline__)) +vtrn_u8 (uint8x8_t a, uint8x8_t b) +{ + return (uint8x8x2_t) {vtrn1_u8 (a, b), vtrn2_u8 (a, b)}; +} + +__extension__ static __inline uint16x4x2_t __attribute__ ((__always_inline__)) +vtrn_u16 (uint16x4_t a, uint16x4_t b) +{ + return (uint16x4x2_t) {vtrn1_u16 (a, b), vtrn2_u16 (a, b)}; +} + +__extension__ static __inline uint32x2x2_t __attribute__ ((__always_inline__)) +vtrn_u32 (uint32x2_t a, uint32x2_t b) +{ + return (uint32x2x2_t) {vtrn1_u32 (a, b), vtrn2_u32 (a, b)}; +} + +__extension__ static __inline float32x4x2_t __attribute__ ((__always_inline__)) +vtrnq_f32 (float32x4_t a, float32x4_t b) +{ + return (float32x4x2_t) {vtrn1q_f32 (a, b), vtrn2q_f32 (a, b)}; +} + +__extension__ static __inline poly8x16x2_t __attribute__ ((__always_inline__)) +vtrnq_p8 (poly8x16_t a, poly8x16_t b) +{ + return (poly8x16x2_t) {vtrn1q_p8 (a, b), vtrn2q_p8 (a, b)}; +} + +__extension__ static __inline poly16x8x2_t __attribute__ ((__always_inline__)) +vtrnq_p16 (poly16x8_t a, poly16x8_t b) +{ + return (poly16x8x2_t) {vtrn1q_p16 (a, b), vtrn2q_p16 (a, b)}; +} + +__extension__ static __inline int8x16x2_t __attribute__ ((__always_inline__)) +vtrnq_s8 (int8x16_t a, int8x16_t b) +{ + return (int8x16x2_t) {vtrn1q_s8 (a, b), vtrn2q_s8 (a, b)}; +} + +__extension__ static __inline int16x8x2_t __attribute__ ((__always_inline__)) +vtrnq_s16 (int16x8_t a, int16x8_t b) +{ + return (int16x8x2_t) {vtrn1q_s16 (a, b), vtrn2q_s16 (a, b)}; +} + +__extension__ static __inline int32x4x2_t __attribute__ ((__always_inline__)) +vtrnq_s32 (int32x4_t a, int32x4_t b) +{ + return (int32x4x2_t) {vtrn1q_s32 (a, b), vtrn2q_s32 (a, b)}; +} + +__extension__ static __inline uint8x16x2_t __attribute__ ((__always_inline__)) +vtrnq_u8 (uint8x16_t a, uint8x16_t b) +{ + return (uint8x16x2_t) {vtrn1q_u8 (a, b), vtrn2q_u8 (a, b)}; +} + +__extension__ static __inline uint16x8x2_t __attribute__ ((__always_inline__)) +vtrnq_u16 (uint16x8_t a, uint16x8_t b) +{ + return (uint16x8x2_t) {vtrn1q_u16 (a, b), vtrn2q_u16 (a, b)}; +} + +__extension__ static __inline uint32x4x2_t __attribute__ ((__always_inline__)) +vtrnq_u32 (uint32x4_t a, uint32x4_t b) +{ + return (uint32x4x2_t) {vtrn1q_u32 (a, b), vtrn2q_u32 (a, b)}; +} + +/* vtst */ + +__extension__ static __inline uint8x8_t 
__attribute__ ((__always_inline__)) +vtst_s8 (int8x8_t __a, int8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmtstv8qi (__a, __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vtst_s16 (int16x4_t __a, int16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_cmtstv4hi (__a, __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vtst_s32 (int32x2_t __a, int32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmtstv2si (__a, __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vtst_s64 (int64x1_t __a, int64x1_t __b) +{ + return (__a & __b) ? -1ll : 0ll; +} + +__extension__ static __inline uint8x8_t __attribute__ ((__always_inline__)) +vtst_u8 (uint8x8_t __a, uint8x8_t __b) +{ + return (uint8x8_t) __builtin_aarch64_cmtstv8qi ((int8x8_t) __a, + (int8x8_t) __b); +} + +__extension__ static __inline uint16x4_t __attribute__ ((__always_inline__)) +vtst_u16 (uint16x4_t __a, uint16x4_t __b) +{ + return (uint16x4_t) __builtin_aarch64_cmtstv4hi ((int16x4_t) __a, + (int16x4_t) __b); +} + +__extension__ static __inline uint32x2_t __attribute__ ((__always_inline__)) +vtst_u32 (uint32x2_t __a, uint32x2_t __b) +{ + return (uint32x2_t) __builtin_aarch64_cmtstv2si ((int32x2_t) __a, + (int32x2_t) __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vtst_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return (__a & __b) ? -1ll : 0ll; +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vtstq_s8 (int8x16_t __a, int8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmtstv16qi (__a, __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vtstq_s16 (int16x8_t __a, int16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_cmtstv8hi (__a, __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vtstq_s32 (int32x4_t __a, int32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmtstv4si (__a, __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vtstq_s64 (int64x2_t __a, int64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmtstv2di (__a, __b); +} + +__extension__ static __inline uint8x16_t __attribute__ ((__always_inline__)) +vtstq_u8 (uint8x16_t __a, uint8x16_t __b) +{ + return (uint8x16_t) __builtin_aarch64_cmtstv16qi ((int8x16_t) __a, + (int8x16_t) __b); +} + +__extension__ static __inline uint16x8_t __attribute__ ((__always_inline__)) +vtstq_u16 (uint16x8_t __a, uint16x8_t __b) +{ + return (uint16x8_t) __builtin_aarch64_cmtstv8hi ((int16x8_t) __a, + (int16x8_t) __b); +} + +__extension__ static __inline uint32x4_t __attribute__ ((__always_inline__)) +vtstq_u32 (uint32x4_t __a, uint32x4_t __b) +{ + return (uint32x4_t) __builtin_aarch64_cmtstv4si ((int32x4_t) __a, + (int32x4_t) __b); +} + +__extension__ static __inline uint64x2_t __attribute__ ((__always_inline__)) +vtstq_u64 (uint64x2_t __a, uint64x2_t __b) +{ + return (uint64x2_t) __builtin_aarch64_cmtstv2di ((int64x2_t) __a, + (int64x2_t) __b); +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vtstd_s64 (int64x1_t __a, int64x1_t __b) +{ + return (__a & __b) ? -1ll : 0ll; +} + +__extension__ static __inline uint64x1_t __attribute__ ((__always_inline__)) +vtstd_u64 (uint64x1_t __a, uint64x1_t __b) +{ + return (__a & __b) ? 
-1ll : 0ll; +} + +/* vuqadd */ + +__extension__ static __inline int8x8_t __attribute__ ((__always_inline__)) +vuqadd_s8 (int8x8_t __a, uint8x8_t __b) +{ + return (int8x8_t) __builtin_aarch64_suqaddv8qi (__a, (int8x8_t) __b); +} + +__extension__ static __inline int16x4_t __attribute__ ((__always_inline__)) +vuqadd_s16 (int16x4_t __a, uint16x4_t __b) +{ + return (int16x4_t) __builtin_aarch64_suqaddv4hi (__a, (int16x4_t) __b); +} + +__extension__ static __inline int32x2_t __attribute__ ((__always_inline__)) +vuqadd_s32 (int32x2_t __a, uint32x2_t __b) +{ + return (int32x2_t) __builtin_aarch64_suqaddv2si (__a, (int32x2_t) __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vuqadd_s64 (int64x1_t __a, uint64x1_t __b) +{ + return (int64x1_t) __builtin_aarch64_suqadddi (__a, (int64x1_t) __b); +} + +__extension__ static __inline int8x16_t __attribute__ ((__always_inline__)) +vuqaddq_s8 (int8x16_t __a, uint8x16_t __b) +{ + return (int8x16_t) __builtin_aarch64_suqaddv16qi (__a, (int8x16_t) __b); +} + +__extension__ static __inline int16x8_t __attribute__ ((__always_inline__)) +vuqaddq_s16 (int16x8_t __a, uint16x8_t __b) +{ + return (int16x8_t) __builtin_aarch64_suqaddv8hi (__a, (int16x8_t) __b); +} + +__extension__ static __inline int32x4_t __attribute__ ((__always_inline__)) +vuqaddq_s32 (int32x4_t __a, uint32x4_t __b) +{ + return (int32x4_t) __builtin_aarch64_suqaddv4si (__a, (int32x4_t) __b); +} + +__extension__ static __inline int64x2_t __attribute__ ((__always_inline__)) +vuqaddq_s64 (int64x2_t __a, uint64x2_t __b) +{ + return (int64x2_t) __builtin_aarch64_suqaddv2di (__a, (int64x2_t) __b); +} + +__extension__ static __inline int8_t __attribute__ ((__always_inline__)) +vuqaddb_s8 (int8_t __a, uint8_t __b) +{ + return (int8_t) __builtin_aarch64_suqaddqi (__a, (int8_t) __b); +} + +__extension__ static __inline int16_t __attribute__ ((__always_inline__)) +vuqaddh_s16 (int16_t __a, uint16_t __b) +{ + return (int16_t) __builtin_aarch64_suqaddhi (__a, (int16_t) __b); +} + +__extension__ static __inline int32_t __attribute__ ((__always_inline__)) +vuqadds_s32 (int32_t __a, uint32_t __b) +{ + return (int32_t) __builtin_aarch64_suqaddsi (__a, (int32_t) __b); +} + +__extension__ static __inline int64x1_t __attribute__ ((__always_inline__)) +vuqaddd_s64 (int64x1_t __a, uint64x1_t __b) +{ + return (int64x1_t) __builtin_aarch64_suqadddi (__a, (int64x1_t) __b); +} + +#define __DEFINTERLEAVE(op, rettype, intype, funcsuffix, Q) \ + __extension__ static __inline rettype \ + __attribute__ ((__always_inline__)) \ + v ## op ## Q ## _ ## funcsuffix (intype a, intype b) \ + { \ + return (rettype) {v ## op ## 1 ## Q ## _ ## funcsuffix (a, b), \ + v ## op ## 2 ## Q ## _ ## funcsuffix (a, b)}; \ + } + +#define __INTERLEAVE_LIST(op) \ + __DEFINTERLEAVE (op, float32x2x2_t, float32x2_t, f32,) \ + __DEFINTERLEAVE (op, poly8x8x2_t, poly8x8_t, p8,) \ + __DEFINTERLEAVE (op, poly16x4x2_t, poly16x4_t, p16,) \ + __DEFINTERLEAVE (op, int8x8x2_t, int8x8_t, s8,) \ + __DEFINTERLEAVE (op, int16x4x2_t, int16x4_t, s16,) \ + __DEFINTERLEAVE (op, int32x2x2_t, int32x2_t, s32,) \ + __DEFINTERLEAVE (op, uint8x8x2_t, uint8x8_t, u8,) \ + __DEFINTERLEAVE (op, uint16x4x2_t, uint16x4_t, u16,) \ + __DEFINTERLEAVE (op, uint32x2x2_t, uint32x2_t, u32,) \ + __DEFINTERLEAVE (op, float32x4x2_t, float32x4_t, f32, q) \ + __DEFINTERLEAVE (op, poly8x16x2_t, poly8x16_t, p8, q) \ + __DEFINTERLEAVE (op, poly16x8x2_t, poly16x8_t, p16, q) \ + __DEFINTERLEAVE (op, int8x16x2_t, int8x16_t, s8, q) \ + __DEFINTERLEAVE (op, 
int16x8x2_t, int16x8_t, s16, q) \ + __DEFINTERLEAVE (op, int32x4x2_t, int32x4_t, s32, q) \ + __DEFINTERLEAVE (op, uint8x16x2_t, uint8x16_t, u8, q) \ + __DEFINTERLEAVE (op, uint16x8x2_t, uint16x8_t, u16, q) \ + __DEFINTERLEAVE (op, uint32x4x2_t, uint32x4_t, u32, q) + +/* vuzp */ + +__INTERLEAVE_LIST (uzp) + +/* vzip */ + +__INTERLEAVE_LIST (zip) + +#undef __INTERLEAVE_LIST +#undef __DEFINTERLEAVE + +/* End of optimal implementations in approved order. */ + +#undef __aarch64_vget_lane_any +#undef __aarch64_vget_lane_f32 +#undef __aarch64_vget_lane_f64 +#undef __aarch64_vget_lane_p8 +#undef __aarch64_vget_lane_p16 +#undef __aarch64_vget_lane_s8 +#undef __aarch64_vget_lane_s16 +#undef __aarch64_vget_lane_s32 +#undef __aarch64_vget_lane_s64 +#undef __aarch64_vget_lane_u8 +#undef __aarch64_vget_lane_u16 +#undef __aarch64_vget_lane_u32 +#undef __aarch64_vget_lane_u64 + +#undef __aarch64_vgetq_lane_f32 +#undef __aarch64_vgetq_lane_f64 +#undef __aarch64_vgetq_lane_p8 +#undef __aarch64_vgetq_lane_p16 +#undef __aarch64_vgetq_lane_s8 +#undef __aarch64_vgetq_lane_s16 +#undef __aarch64_vgetq_lane_s32 +#undef __aarch64_vgetq_lane_s64 +#undef __aarch64_vgetq_lane_u8 +#undef __aarch64_vgetq_lane_u16 +#undef __aarch64_vgetq_lane_u32 +#undef __aarch64_vgetq_lane_u64 + +#undef __aarch64_vdup_lane_any +#undef __aarch64_vdup_lane_f32 +#undef __aarch64_vdup_lane_f64 +#undef __aarch64_vdup_lane_p8 +#undef __aarch64_vdup_lane_p16 +#undef __aarch64_vdup_lane_s8 +#undef __aarch64_vdup_lane_s16 +#undef __aarch64_vdup_lane_s32 +#undef __aarch64_vdup_lane_s64 +#undef __aarch64_vdup_lane_u8 +#undef __aarch64_vdup_lane_u16 +#undef __aarch64_vdup_lane_u32 +#undef __aarch64_vdup_lane_u64 +#undef __aarch64_vdup_laneq_f32 +#undef __aarch64_vdup_laneq_f64 +#undef __aarch64_vdup_laneq_p8 +#undef __aarch64_vdup_laneq_p16 +#undef __aarch64_vdup_laneq_s8 +#undef __aarch64_vdup_laneq_s16 +#undef __aarch64_vdup_laneq_s32 +#undef __aarch64_vdup_laneq_s64 +#undef __aarch64_vdup_laneq_u8 +#undef __aarch64_vdup_laneq_u16 +#undef __aarch64_vdup_laneq_u32 +#undef __aarch64_vdup_laneq_u64 +#undef __aarch64_vdupq_lane_f32 +#undef __aarch64_vdupq_lane_f64 +#undef __aarch64_vdupq_lane_p8 +#undef __aarch64_vdupq_lane_p16 +#undef __aarch64_vdupq_lane_s8 +#undef __aarch64_vdupq_lane_s16 +#undef __aarch64_vdupq_lane_s32 +#undef __aarch64_vdupq_lane_s64 +#undef __aarch64_vdupq_lane_u8 +#undef __aarch64_vdupq_lane_u16 +#undef __aarch64_vdupq_lane_u32 +#undef __aarch64_vdupq_lane_u64 +#undef __aarch64_vdupq_laneq_f32 +#undef __aarch64_vdupq_laneq_f64 +#undef __aarch64_vdupq_laneq_p8 +#undef __aarch64_vdupq_laneq_p16 +#undef __aarch64_vdupq_laneq_s8 +#undef __aarch64_vdupq_laneq_s16 +#undef __aarch64_vdupq_laneq_s32 +#undef __aarch64_vdupq_laneq_s64 +#undef __aarch64_vdupq_laneq_u8 +#undef __aarch64_vdupq_laneq_u16 +#undef __aarch64_vdupq_laneq_u32 +#undef __aarch64_vdupq_laneq_u64 + +#endif
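Illustrative usage of the intrinsics defined above (a sketch, not part of the imported header; it assumes an AArch64 target compiled against this arm_neon.h):

#include <arm_neon.h>

/* vst4q_u8 stores its four 16-byte registers interleaved:
   dst[0] = planes.val[0][0], dst[1] = planes.val[1][0], ...  */
void
store_interleaved (uint8_t *dst, uint8x16x4_t planes)
{
  vst4q_u8 (dst, planes);
}

/* vtstq_u8 sets each lane to all-ones where (a & b) is non-zero.  */
uint8x16_t
any_common_bits (uint8x16_t a, uint8x16_t b)
{
  return vtstq_u8 (a, b);
}

/* vtrn_u8 interleaves the even and odd lanes of its two operands.  */
uint8x8x2_t
transpose_lanes (uint8x8_t a, uint8x8_t b)
{
  return vtrn_u8 (a, b);
}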
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/float.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/float.h new file mode 100644 index 0000000..a8e05bf --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/float.h
@@ -0,0 +1,277 @@ +/* Copyright (C) 2002-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; either version 3, or (at your option) +any later version. + +GCC is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +/* + * ISO C Standard: 5.2.4.2.2 Characteristics of floating types <float.h> + */ + +#ifndef _FLOAT_H___ +#define _FLOAT_H___ + +/* Radix of exponent representation, b. */ +#undef FLT_RADIX +#define FLT_RADIX __FLT_RADIX__ + +/* Number of base-FLT_RADIX digits in the significand, p. */ +#undef FLT_MANT_DIG +#undef DBL_MANT_DIG +#undef LDBL_MANT_DIG +#define FLT_MANT_DIG __FLT_MANT_DIG__ +#define DBL_MANT_DIG __DBL_MANT_DIG__ +#define LDBL_MANT_DIG __LDBL_MANT_DIG__ + +/* Number of decimal digits, q, such that any floating-point number with q + decimal digits can be rounded into a floating-point number with p radix b + digits and back again without change to the q decimal digits, + + p * log10(b) if b is a power of 10 + floor((p - 1) * log10(b)) otherwise +*/ +#undef FLT_DIG +#undef DBL_DIG +#undef LDBL_DIG +#define FLT_DIG __FLT_DIG__ +#define DBL_DIG __DBL_DIG__ +#define LDBL_DIG __LDBL_DIG__ + +/* Minimum int x such that FLT_RADIX**(x-1) is a normalized float, emin */ +#undef FLT_MIN_EXP +#undef DBL_MIN_EXP +#undef LDBL_MIN_EXP +#define FLT_MIN_EXP __FLT_MIN_EXP__ +#define DBL_MIN_EXP __DBL_MIN_EXP__ +#define LDBL_MIN_EXP __LDBL_MIN_EXP__ + +/* Minimum negative integer such that 10 raised to that power is in the + range of normalized floating-point numbers, + + ceil(log10(b) * (emin - 1)) +*/ +#undef FLT_MIN_10_EXP +#undef DBL_MIN_10_EXP +#undef LDBL_MIN_10_EXP +#define FLT_MIN_10_EXP __FLT_MIN_10_EXP__ +#define DBL_MIN_10_EXP __DBL_MIN_10_EXP__ +#define LDBL_MIN_10_EXP __LDBL_MIN_10_EXP__ + +/* Maximum int x such that FLT_RADIX**(x-1) is a representable float, emax. */ +#undef FLT_MAX_EXP +#undef DBL_MAX_EXP +#undef LDBL_MAX_EXP +#define FLT_MAX_EXP __FLT_MAX_EXP__ +#define DBL_MAX_EXP __DBL_MAX_EXP__ +#define LDBL_MAX_EXP __LDBL_MAX_EXP__ + +/* Maximum integer such that 10 raised to that power is in the range of + representable finite floating-point numbers, + + floor(log10((1 - b**-p) * b**emax)) +*/ +#undef FLT_MAX_10_EXP +#undef DBL_MAX_10_EXP +#undef LDBL_MAX_10_EXP +#define FLT_MAX_10_EXP __FLT_MAX_10_EXP__ +#define DBL_MAX_10_EXP __DBL_MAX_10_EXP__ +#define LDBL_MAX_10_EXP __LDBL_MAX_10_EXP__ + +/* Maximum representable finite floating-point number, + + (1 - b**-p) * b**emax +*/ +#undef FLT_MAX +#undef DBL_MAX +#undef LDBL_MAX +#define FLT_MAX __FLT_MAX__ +#define DBL_MAX __DBL_MAX__ +#define LDBL_MAX __LDBL_MAX__ + +/* The difference between 1 and the least value greater than 1 that is + representable in the given floating point type, b**1-p. 
*/ +#undef FLT_EPSILON +#undef DBL_EPSILON +#undef LDBL_EPSILON +#define FLT_EPSILON __FLT_EPSILON__ +#define DBL_EPSILON __DBL_EPSILON__ +#define LDBL_EPSILON __LDBL_EPSILON__ + +/* Minimum normalized positive floating-point number, b**(emin - 1). */ +#undef FLT_MIN +#undef DBL_MIN +#undef LDBL_MIN +#define FLT_MIN __FLT_MIN__ +#define DBL_MIN __DBL_MIN__ +#define LDBL_MIN __LDBL_MIN__ + +/* Addition rounds to 0: zero, 1: nearest, 2: +inf, 3: -inf, -1: unknown. */ +/* ??? This is supposed to change with calls to fesetround in <fenv.h>. */ +#undef FLT_ROUNDS +#define FLT_ROUNDS 1 + +#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L +/* The floating-point expression evaluation method. + -1 indeterminate + 0 evaluate all operations and constants just to the range and + precision of the type + 1 evaluate operations and constants of type float and double + to the range and precision of the double type, evaluate + long double operations and constants to the range and + precision of the long double type + 2 evaluate all operations and constants to the range and + precision of the long double type + + ??? This ought to change with the setting of the fp control word; + the value provided by the compiler assumes the widest setting. */ +#undef FLT_EVAL_METHOD +#define FLT_EVAL_METHOD __FLT_EVAL_METHOD__ + +/* Number of decimal digits, n, such that any floating-point number in the + widest supported floating type with pmax radix b digits can be rounded + to a floating-point number with n decimal digits and back again without + change to the value, + + pmax * log10(b) if b is a power of 10 + ceil(1 + pmax * log10(b)) otherwise +*/ +#undef DECIMAL_DIG +#define DECIMAL_DIG __DECIMAL_DIG__ + +#endif /* C99 */ + +#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 201112L +/* Versions of DECIMAL_DIG for each floating-point type. */ +#undef FLT_DECIMAL_DIG +#undef DBL_DECIMAL_DIG +#undef LDBL_DECIMAL_DIG +#define FLT_DECIMAL_DIG __FLT_DECIMAL_DIG__ +#define DBL_DECIMAL_DIG __DBL_DECIMAL_DIG__ +#define LDBL_DECIMAL_DIG __DECIMAL_DIG__ + +/* Whether types support subnormal numbers. */ +#undef FLT_HAS_SUBNORM +#undef DBL_HAS_SUBNORM +#undef LDBL_HAS_SUBNORM +#define FLT_HAS_SUBNORM __FLT_HAS_DENORM__ +#define DBL_HAS_SUBNORM __DBL_HAS_DENORM__ +#define LDBL_HAS_SUBNORM __LDBL_HAS_DENORM__ + +/* Minimum positive values, including subnormals. */ +#undef FLT_TRUE_MIN +#undef DBL_TRUE_MIN +#undef LDBL_TRUE_MIN +#if __FLT_HAS_DENORM__ +#define FLT_TRUE_MIN __FLT_DENORM_MIN__ +#else +#define FLT_TRUE_MIN __FLT_MIN__ +#endif +#if __DBL_HAS_DENORM__ +#define DBL_TRUE_MIN __DBL_DENORM_MIN__ +#else +#define DBL_TRUE_MIN __DBL_MIN__ +#endif +#if __LDBL_HAS_DENORM__ +#define LDBL_TRUE_MIN __LDBL_DENORM_MIN__ +#else +#define LDBL_TRUE_MIN __LDBL_MIN__ +#endif + +#endif /* C11 */ + +#ifdef __STDC_WANT_DEC_FP__ +/* Draft Technical Report 24732, extension for decimal floating-point + arithmetic: Characteristic of decimal floating types <float.h>. */ + +/* Number of base-FLT_RADIX digits in the significand, p. */ +#undef DEC32_MANT_DIG +#undef DEC64_MANT_DIG +#undef DEC128_MANT_DIG +#define DEC32_MANT_DIG __DEC32_MANT_DIG__ +#define DEC64_MANT_DIG __DEC64_MANT_DIG__ +#define DEC128_MANT_DIG __DEC128_MANT_DIG__ + +/* Minimum exponent. */ +#undef DEC32_MIN_EXP +#undef DEC64_MIN_EXP +#undef DEC128_MIN_EXP +#define DEC32_MIN_EXP __DEC32_MIN_EXP__ +#define DEC64_MIN_EXP __DEC64_MIN_EXP__ +#define DEC128_MIN_EXP __DEC128_MIN_EXP__ + +/* Maximum exponent. 
*/ +#undef DEC32_MAX_EXP +#undef DEC64_MAX_EXP +#undef DEC128_MAX_EXP +#define DEC32_MAX_EXP __DEC32_MAX_EXP__ +#define DEC64_MAX_EXP __DEC64_MAX_EXP__ +#define DEC128_MAX_EXP __DEC128_MAX_EXP__ + +/* Maximum representable finite decimal floating-point number + (there are 6, 15, and 33 9s after the decimal points respectively). */ +#undef DEC32_MAX +#undef DEC64_MAX +#undef DEC128_MAX +#define DEC32_MAX __DEC32_MAX__ +#define DEC64_MAX __DEC64_MAX__ +#define DEC128_MAX __DEC128_MAX__ + +/* The difference between 1 and the least value greater than 1 that is + representable in the given floating point type. */ +#undef DEC32_EPSILON +#undef DEC64_EPSILON +#undef DEC128_EPSILON +#define DEC32_EPSILON __DEC32_EPSILON__ +#define DEC64_EPSILON __DEC64_EPSILON__ +#define DEC128_EPSILON __DEC128_EPSILON__ + +/* Minimum normalized positive floating-point number. */ +#undef DEC32_MIN +#undef DEC64_MIN +#undef DEC128_MIN +#define DEC32_MIN __DEC32_MIN__ +#define DEC64_MIN __DEC64_MIN__ +#define DEC128_MIN __DEC128_MIN__ + +/* Minimum subnormal positive floating-point number. */ +#undef DEC32_SUBNORMAL_MIN +#undef DEC64_SUBNORMAL_MIN +#undef DEC128_SUBNORMAL_MIN +#define DEC32_SUBNORMAL_MIN __DEC32_SUBNORMAL_MIN__ +#define DEC64_SUBNORMAL_MIN __DEC64_SUBNORMAL_MIN__ +#define DEC128_SUBNORMAL_MIN __DEC128_SUBNORMAL_MIN__ + +/* The floating-point expression evaluation method. + -1 indeterminate + 0 evaluate all operations and constants just to the range and + precision of the type + 1 evaluate operations and constants of type _Decimal32 + and _Decimal64 to the range and precision of the _Decimal64 + type, evaluate _Decimal128 operations and constants to the + range and precision of the _Decimal128 type; + 2 evaluate all operations and constants to the range and + precision of the _Decimal128 type. */ + +#undef DEC_EVAL_METHOD +#define DEC_EVAL_METHOD __DEC_EVAL_METHOD__ + +#endif /* __STDC_WANT_DEC_FP__ */ + +#endif /* _FLOAT_H___ */
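A minimal sketch of how the limits above are used in practice (illustrative only; nearly_equal and the factor of 4 are arbitrary choices for the example; link with -lm for fmaxf/fabsf):

#include <float.h>
#include <math.h>

/* FLT_EPSILON is the gap between 1.0f and the next representable
   float, so it bounds the relative error of one rounding step.  */
int
nearly_equal (float a, float b)
{
  float scale = fmaxf (fabsf (a), fabsf (b));
  return fabsf (a - b) <= 4.0f * FLT_EPSILON * scale;
}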
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/iso646.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/iso646.h new file mode 100644 index 0000000..89bc8f4 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/iso646.h
@@ -0,0 +1,45 @@ +/* Copyright (C) 1997-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; either version 3, or (at your option) +any later version. + +GCC is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +/* + * ISO C Standard: 7.9 Alternative spellings <iso646.h> + */ + +#ifndef _ISO646_H +#define _ISO646_H + +#ifndef __cplusplus +#define and && +#define and_eq &= +#define bitand & +#define bitor | +#define compl ~ +#define not ! +#define not_eq != +#define or || +#define or_eq |= +#define xor ^ +#define xor_eq ^= +#endif + +#endif
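A short sketch of the alternative spellings in use (illustrative; in C++ these tokens are already keywords, which is why the header defines them only for C):

#include <iso646.h>

int
in_range (int x, int lo, int hi)
{
  return x >= lo and x <= hi and x not_eq 0;  /* same as && and != */
}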
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/omp.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/omp.h new file mode 100644 index 0000000..9bcceb5 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/omp.h
@@ -0,0 +1,127 @@ +/* Copyright (C) 2005-2014 Free Software Foundation, Inc. + Contributed by Richard Henderson <rth@redhat.com>. + + This file is part of the GNU OpenMP Library (libgomp). + + Libgomp is free software; you can redistribute it and/or modify it + under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 3, or (at your option) + any later version. + + Libgomp is distributed in the hope that it will be useful, but WITHOUT ANY + WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS + FOR A PARTICULAR PURPOSE. See the GNU General Public License for + more details. + + Under Section 7 of GPL version 3, you are granted additional + permissions described in the GCC Runtime Library Exception, version + 3.1, as published by the Free Software Foundation. + + You should have received a copy of the GNU General Public License and + a copy of the GCC Runtime Library Exception along with this program; + see the files COPYING3 and COPYING.RUNTIME respectively. If not, see + <http://www.gnu.org/licenses/>. */ + +#ifndef _OMP_H +#define _OMP_H 1 + +#ifndef _LIBGOMP_OMP_LOCK_DEFINED +#define _LIBGOMP_OMP_LOCK_DEFINED 1 +/* These two structures get edited by the libgomp build process to + reflect the shape of the two types. Their internals are private + to the library. */ + +typedef struct +{ + unsigned char _x[4] + __attribute__((__aligned__(4))); +} omp_lock_t; + +typedef struct +{ + unsigned char _x[16] + __attribute__((__aligned__(8))); +} omp_nest_lock_t; +#endif + +typedef enum omp_sched_t +{ + omp_sched_static = 1, + omp_sched_dynamic = 2, + omp_sched_guided = 3, + omp_sched_auto = 4 +} omp_sched_t; + +typedef enum omp_proc_bind_t +{ + omp_proc_bind_false = 0, + omp_proc_bind_true = 1, + omp_proc_bind_master = 2, + omp_proc_bind_close = 3, + omp_proc_bind_spread = 4 +} omp_proc_bind_t; + +#ifdef __cplusplus +extern "C" { +# define __GOMP_NOTHROW throw () +#else +# define __GOMP_NOTHROW __attribute__((__nothrow__)) +#endif + +extern void omp_set_num_threads (int) __GOMP_NOTHROW; +extern int omp_get_num_threads (void) __GOMP_NOTHROW; +extern int omp_get_max_threads (void) __GOMP_NOTHROW; +extern int omp_get_thread_num (void) __GOMP_NOTHROW; +extern int omp_get_num_procs (void) __GOMP_NOTHROW; + +extern int omp_in_parallel (void) __GOMP_NOTHROW; + +extern void omp_set_dynamic (int) __GOMP_NOTHROW; +extern int omp_get_dynamic (void) __GOMP_NOTHROW; + +extern void omp_set_nested (int) __GOMP_NOTHROW; +extern int omp_get_nested (void) __GOMP_NOTHROW; + +extern void omp_init_lock (omp_lock_t *) __GOMP_NOTHROW; +extern void omp_destroy_lock (omp_lock_t *) __GOMP_NOTHROW; +extern void omp_set_lock (omp_lock_t *) __GOMP_NOTHROW; +extern void omp_unset_lock (omp_lock_t *) __GOMP_NOTHROW; +extern int omp_test_lock (omp_lock_t *) __GOMP_NOTHROW; + +extern void omp_init_nest_lock (omp_nest_lock_t *) __GOMP_NOTHROW; +extern void omp_destroy_nest_lock (omp_nest_lock_t *) __GOMP_NOTHROW; +extern void omp_set_nest_lock (omp_nest_lock_t *) __GOMP_NOTHROW; +extern void omp_unset_nest_lock (omp_nest_lock_t *) __GOMP_NOTHROW; +extern int omp_test_nest_lock (omp_nest_lock_t *) __GOMP_NOTHROW; + +extern double omp_get_wtime (void) __GOMP_NOTHROW; +extern double omp_get_wtick (void) __GOMP_NOTHROW; + +extern void omp_set_schedule (omp_sched_t, int) __GOMP_NOTHROW; +extern void omp_get_schedule (omp_sched_t *, int *) __GOMP_NOTHROW; +extern int omp_get_thread_limit (void) __GOMP_NOTHROW; +extern void omp_set_max_active_levels (int) 
__GOMP_NOTHROW; +extern int omp_get_max_active_levels (void) __GOMP_NOTHROW; +extern int omp_get_level (void) __GOMP_NOTHROW; +extern int omp_get_ancestor_thread_num (int) __GOMP_NOTHROW; +extern int omp_get_team_size (int) __GOMP_NOTHROW; +extern int omp_get_active_level (void) __GOMP_NOTHROW; + +extern int omp_in_final (void) __GOMP_NOTHROW; + +extern int omp_get_cancellation (void) __GOMP_NOTHROW; +extern omp_proc_bind_t omp_get_proc_bind (void) __GOMP_NOTHROW; + +extern void omp_set_default_device (int) __GOMP_NOTHROW; +extern int omp_get_default_device (void) __GOMP_NOTHROW; +extern int omp_get_num_devices (void) __GOMP_NOTHROW; +extern int omp_get_num_teams (void) __GOMP_NOTHROW; +extern int omp_get_team_num (void) __GOMP_NOTHROW; + +extern int omp_is_initial_device (void) __GOMP_NOTHROW; + +#ifdef __cplusplus +} +#endif + +#endif /* _OMP_H */
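A minimal sketch against the API declared above (illustrative; compile with -fopenmp so the pragma is honored and libgomp is linked):

#include <omp.h>
#include <stdio.h>

int
main (void)
{
  omp_set_num_threads (4);
#pragma omp parallel
  printf ("thread %d of %d\n",
          omp_get_thread_num (), omp_get_num_threads ());
  return 0;
}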
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdalign.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdalign.h new file mode 100644 index 0000000..ee2d81f --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdalign.h
@@ -0,0 +1,39 @@ +/* Copyright (C) 2011-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; either version 3, or (at your option) +any later version. + +GCC is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +/* ISO C1X: 7.15 Alignment <stdalign.h>. */ + +#ifndef _STDALIGN_H +#define _STDALIGN_H + +#ifndef __cplusplus + +#define alignas _Alignas +#define alignof _Alignof + +#define __alignas_is_defined 1 +#define __alignof_is_defined 1 + +#endif + +#endif /* stdalign.h */
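A small sketch of the C11 spellings defined above (illustrative; requires -std=c11 or later):

#include <stdalign.h>
#include <stdio.h>

/* Request 16-byte alignment for a plain byte buffer.  */
static alignas (16) unsigned char buffer[64];

int
main (void)
{
  printf ("buffer %p, alignof (double) = %zu\n",
          (void *) buffer, alignof (double));
  return 0;
}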
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdarg.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdarg.h new file mode 100644 index 0000000..1d4418b --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdarg.h
@@ -0,0 +1,126 @@ +/* Copyright (C) 1989-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; either version 3, or (at your option) +any later version. + +GCC is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +/* + * ISO C Standard: 7.15 Variable arguments <stdarg.h> + */ + +#ifndef _STDARG_H +#ifndef _ANSI_STDARG_H_ +#ifndef __need___va_list +#define _STDARG_H +#define _ANSI_STDARG_H_ +#endif /* not __need___va_list */ +#undef __need___va_list + +/* Define __gnuc_va_list. */ + +#ifndef __GNUC_VA_LIST +#define __GNUC_VA_LIST +typedef __builtin_va_list __gnuc_va_list; +#endif + +/* Define the standard macros for the user, + if this invocation was from the user program. */ +#ifdef _STDARG_H + +#define va_start(v,l) __builtin_va_start(v,l) +#define va_end(v) __builtin_va_end(v) +#define va_arg(v,l) __builtin_va_arg(v,l) +#if !defined(__STRICT_ANSI__) || __STDC_VERSION__ + 0 >= 199900L || defined(__GXX_EXPERIMENTAL_CXX0X__) +#define va_copy(d,s) __builtin_va_copy(d,s) +#endif +#define __va_copy(d,s) __builtin_va_copy(d,s) + +/* Define va_list, if desired, from __gnuc_va_list. */ +/* We deliberately do not define va_list when called from + stdio.h, because ANSI C says that stdio.h is not supposed to define + va_list. stdio.h needs to have access to that data type, + but must not use that name. It should use the name __gnuc_va_list, + which is safe because it is reserved for the implementation. */ + +#ifdef _BSD_VA_LIST +#undef _BSD_VA_LIST +#endif + +#if defined(__svr4__) || (defined(_SCO_DS) && !defined(__VA_LIST)) +/* SVR4.2 uses _VA_LIST for an internal alias for va_list, + so we must avoid testing it and setting it here. + SVR4 uses _VA_LIST as a flag in stdarg.h, but we should + have no conflict with that. */ +#ifndef _VA_LIST_ +#define _VA_LIST_ +#ifdef __i860__ +#ifndef _VA_LIST +#define _VA_LIST va_list +#endif +#endif /* __i860__ */ +typedef __gnuc_va_list va_list; +#ifdef _SCO_DS +#define __VA_LIST +#endif +#endif /* _VA_LIST_ */ +#else /* not __svr4__ || _SCO_DS */ + +/* The macro _VA_LIST_ is the same thing used by this file in Ultrix. + But on BSD NET2 we must not test or define or undef it. + (Note that the comments in NET 2's ansi.h + are incorrect for _VA_LIST_--see stdio.h!) */ +#if !defined (_VA_LIST_) || defined (__BSD_NET2__) || defined (____386BSD____) || defined (__bsdi__) || defined (__sequent__) || defined (__FreeBSD__) || defined(WINNT) +/* The macro _VA_LIST_DEFINED is used in Windows NT 3.5 */ +#ifndef _VA_LIST_DEFINED +/* The macro _VA_LIST is used in SCO Unix 3.2. */ +#ifndef _VA_LIST +/* The macro _VA_LIST_T_H is used in the Bull dpx2 */ +#ifndef _VA_LIST_T_H +/* The macro __va_list__ is used by BeOS. 
*/ +#ifndef __va_list__ +typedef __gnuc_va_list va_list; +#endif /* not __va_list__ */ +#endif /* not _VA_LIST_T_H */ +#endif /* not _VA_LIST */ +#endif /* not _VA_LIST_DEFINED */ +#if !(defined (__BSD_NET2__) || defined (____386BSD____) || defined (__bsdi__) || defined (__sequent__) || defined (__FreeBSD__)) +#define _VA_LIST_ +#endif +#ifndef _VA_LIST +#define _VA_LIST +#endif +#ifndef _VA_LIST_DEFINED +#define _VA_LIST_DEFINED +#endif +#ifndef _VA_LIST_T_H +#define _VA_LIST_T_H +#endif +#ifndef __va_list__ +#define __va_list__ +#endif + +#endif /* not _VA_LIST_, except on certain systems */ + +#endif /* not __svr4__ */ + +#endif /* _STDARG_H */ + +#endif /* not _ANSI_STDARG_H_ */ +#endif /* not _STDARG_H */
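A minimal variadic function built on the macros above (illustrative; sum_ints is a name invented for the example):

#include <stdarg.h>

int
sum_ints (int count, ...)
{
  va_list ap;
  int total = 0;

  va_start (ap, count);          /* count is the last named parameter */
  while (count-- > 0)
    total += va_arg (ap, int);   /* consume one int per iteration */
  va_end (ap);
  return total;
}

/* sum_ints (3, 1, 2, 3) evaluates to 6.  */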
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdatomic.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdatomic.h new file mode 100644 index 0000000..108259b --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdatomic.h
@@ -0,0 +1,252 @@ +/* Copyright (C) 2013-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; either version 3, or (at your option) +any later version. + +GCC is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +/* ISO C11 Standard: 7.17 Atomics <stdatomic.h>. */ + +#ifndef _STDATOMIC_H +#define _STDATOMIC_H + +typedef enum + { + memory_order_relaxed = __ATOMIC_RELAXED, + memory_order_consume = __ATOMIC_CONSUME, + memory_order_acquire = __ATOMIC_ACQUIRE, + memory_order_release = __ATOMIC_RELEASE, + memory_order_acq_rel = __ATOMIC_ACQ_REL, + memory_order_seq_cst = __ATOMIC_SEQ_CST + } memory_order; + + +typedef _Atomic _Bool atomic_bool; +typedef _Atomic char atomic_char; +typedef _Atomic signed char atomic_schar; +typedef _Atomic unsigned char atomic_uchar; +typedef _Atomic short atomic_short; +typedef _Atomic unsigned short atomic_ushort; +typedef _Atomic int atomic_int; +typedef _Atomic unsigned int atomic_uint; +typedef _Atomic long atomic_long; +typedef _Atomic unsigned long atomic_ulong; +typedef _Atomic long long atomic_llong; +typedef _Atomic unsigned long long atomic_ullong; +typedef _Atomic __CHAR16_TYPE__ atomic_char16_t; +typedef _Atomic __CHAR32_TYPE__ atomic_char32_t; +typedef _Atomic __WCHAR_TYPE__ atomic_wchar_t; +typedef _Atomic __INT_LEAST8_TYPE__ atomic_int_least8_t; +typedef _Atomic __UINT_LEAST8_TYPE__ atomic_uint_least8_t; +typedef _Atomic __INT_LEAST16_TYPE__ atomic_int_least16_t; +typedef _Atomic __UINT_LEAST16_TYPE__ atomic_uint_least16_t; +typedef _Atomic __INT_LEAST32_TYPE__ atomic_int_least32_t; +typedef _Atomic __UINT_LEAST32_TYPE__ atomic_uint_least32_t; +typedef _Atomic __INT_LEAST64_TYPE__ atomic_int_least64_t; +typedef _Atomic __UINT_LEAST64_TYPE__ atomic_uint_least64_t; +typedef _Atomic __INT_FAST8_TYPE__ atomic_int_fast8_t; +typedef _Atomic __UINT_FAST8_TYPE__ atomic_uint_fast8_t; +typedef _Atomic __INT_FAST16_TYPE__ atomic_int_fast16_t; +typedef _Atomic __UINT_FAST16_TYPE__ atomic_uint_fast16_t; +typedef _Atomic __INT_FAST32_TYPE__ atomic_int_fast32_t; +typedef _Atomic __UINT_FAST32_TYPE__ atomic_uint_fast32_t; +typedef _Atomic __INT_FAST64_TYPE__ atomic_int_fast64_t; +typedef _Atomic __UINT_FAST64_TYPE__ atomic_uint_fast64_t; +typedef _Atomic __INTPTR_TYPE__ atomic_intptr_t; +typedef _Atomic __UINTPTR_TYPE__ atomic_uintptr_t; +typedef _Atomic __SIZE_TYPE__ atomic_size_t; +typedef _Atomic __PTRDIFF_TYPE__ atomic_ptrdiff_t; +typedef _Atomic __INTMAX_TYPE__ atomic_intmax_t; +typedef _Atomic __UINTMAX_TYPE__ atomic_uintmax_t; + + +#define ATOMIC_VAR_INIT(VALUE) (VALUE) +#define atomic_init(PTR, VAL) \ + do \ + { \ + *(PTR) = (VAL); \ + } \ + while (0) + +#define kill_dependency(Y) \ + __extension__ \ + ({ \ + __auto_type __kill_dependency_tmp = (Y); \ + __kill_dependency_tmp; \ + }) + 
+#define atomic_thread_fence(MO) __atomic_thread_fence (MO) +#define atomic_signal_fence(MO) __atomic_signal_fence (MO) +#define atomic_is_lock_free(OBJ) __atomic_is_lock_free (sizeof (*(OBJ)), (OBJ)) + +#define __atomic_type_lock_free(T) \ + (__atomic_always_lock_free (sizeof (T), (void *) 0) \ + ? 2 \ + : (__atomic_is_lock_free (sizeof (T), (void *) 0) ? 1 : 0)) +#define ATOMIC_BOOL_LOCK_FREE \ + __atomic_type_lock_free (atomic_bool) +#define ATOMIC_CHAR_LOCK_FREE \ + __atomic_type_lock_free (atomic_char) +#define ATOMIC_CHAR16_T_LOCK_FREE \ + __atomic_type_lock_free (atomic_char16_t) +#define ATOMIC_CHAR32_T_LOCK_FREE \ + __atomic_type_lock_free (atomic_char32_t) +#define ATOMIC_WCHAR_T_LOCK_FREE \ + __atomic_type_lock_free (atomic_wchar_t) +#define ATOMIC_SHORT_LOCK_FREE \ + __atomic_type_lock_free (atomic_short) +#define ATOMIC_INT_LOCK_FREE \ + __atomic_type_lock_free (atomic_int) +#define ATOMIC_LONG_LOCK_FREE \ + __atomic_type_lock_free (atomic_long) +#define ATOMIC_LLONG_LOCK_FREE \ + __atomic_type_lock_free (atomic_llong) +#define ATOMIC_POINTER_LOCK_FREE \ + __atomic_type_lock_free (void * _Atomic) + + +/* Note that these macros require __typeof__ and __auto_type to remove + _Atomic qualifiers (and const qualifiers, if those are valid on + macro operands). + + Also note that the header file uses the generic form of __atomic + builtins, which requires the address to be taken of the value + parameter, and then we pass that value on. This allows the macros + to work for any type, and the compiler is smart enough to convert + these to lock-free _N variants if possible, and throw away the + temps. */ + +#define atomic_store_explicit(PTR, VAL, MO) \ + __extension__ \ + ({ \ + __auto_type __atomic_store_ptr = (PTR); \ + __typeof__ (*__atomic_store_ptr) __atomic_store_tmp = (VAL); \ + __atomic_store (__atomic_store_ptr, &__atomic_store_tmp, (MO)); \ + }) + +#define atomic_store(PTR, VAL) \ + atomic_store_explicit (PTR, VAL, __ATOMIC_SEQ_CST) + + +#define atomic_load_explicit(PTR, MO) \ + __extension__ \ + ({ \ + __auto_type __atomic_load_ptr = (PTR); \ + __typeof__ (*__atomic_load_ptr) __atomic_load_tmp; \ + __atomic_load (__atomic_load_ptr, &__atomic_load_tmp, (MO)); \ + __atomic_load_tmp; \ + }) + +#define atomic_load(PTR) atomic_load_explicit (PTR, __ATOMIC_SEQ_CST) + + +#define atomic_exchange_explicit(PTR, VAL, MO) \ + __extension__ \ + ({ \ + __auto_type __atomic_exchange_ptr = (PTR); \ + __typeof__ (*__atomic_exchange_ptr) __atomic_exchange_val = (VAL); \ + __typeof__ (*__atomic_exchange_ptr) __atomic_exchange_tmp; \ + __atomic_exchange (__atomic_exchange_ptr, &__atomic_exchange_val, \ + &__atomic_exchange_tmp, (MO)); \ + __atomic_exchange_tmp; \ + }) + +#define atomic_exchange(PTR, VAL) \ + atomic_exchange_explicit (PTR, VAL, __ATOMIC_SEQ_CST) + + +#define atomic_compare_exchange_strong_explicit(PTR, VAL, DES, SUC, FAIL) \ + __extension__ \ + ({ \ + __auto_type __atomic_compare_exchange_ptr = (PTR); \ + __typeof__ (*__atomic_compare_exchange_ptr) __atomic_compare_exchange_tmp \ + = (DES); \ + __atomic_compare_exchange (__atomic_compare_exchange_ptr, (VAL), \ + &__atomic_compare_exchange_tmp, 0, \ + (SUC), (FAIL)); \ + }) + +#define atomic_compare_exchange_strong(PTR, VAL, DES) \ + atomic_compare_exchange_strong_explicit (PTR, VAL, DES, __ATOMIC_SEQ_CST, \ + __ATOMIC_SEQ_CST) + +#define atomic_compare_exchange_weak_explicit(PTR, VAL, DES, SUC, FAIL) \ + __extension__ \ + ({ \ + __auto_type __atomic_compare_exchange_ptr = (PTR); \ + __typeof__ (*__atomic_compare_exchange_ptr) 
__atomic_compare_exchange_tmp \ + = (DES); \ + __atomic_compare_exchange (__atomic_compare_exchange_ptr, (VAL), \ + &__atomic_compare_exchange_tmp, 1, \ + (SUC), (FAIL)); \ + }) + +#define atomic_compare_exchange_weak(PTR, VAL, DES) \ + atomic_compare_exchange_weak_explicit (PTR, VAL, DES, __ATOMIC_SEQ_CST, \ + __ATOMIC_SEQ_CST) + + + +#define atomic_fetch_add(PTR, VAL) __atomic_fetch_add ((PTR), (VAL), \ + __ATOMIC_SEQ_CST) +#define atomic_fetch_add_explicit(PTR, VAL, MO) \ + __atomic_fetch_add ((PTR), (VAL), (MO)) + +#define atomic_fetch_sub(PTR, VAL) __atomic_fetch_sub ((PTR), (VAL), \ + __ATOMIC_SEQ_CST) +#define atomic_fetch_sub_explicit(PTR, VAL, MO) \ + __atomic_fetch_sub ((PTR), (VAL), (MO)) + +#define atomic_fetch_or(PTR, VAL) __atomic_fetch_or ((PTR), (VAL), \ + __ATOMIC_SEQ_CST) +#define atomic_fetch_or_explicit(PTR, VAL, MO) \ + __atomic_fetch_or ((PTR), (VAL), (MO)) + +#define atomic_fetch_xor(PTR, VAL) __atomic_fetch_xor ((PTR), (VAL), \ + __ATOMIC_SEQ_CST) +#define atomic_fetch_xor_explicit(PTR, VAL, MO) \ + __atomic_fetch_xor ((PTR), (VAL), (MO)) + +#define atomic_fetch_and(PTR, VAL) __atomic_fetch_and ((PTR), (VAL), \ + __ATOMIC_SEQ_CST) +#define atomic_fetch_and_explicit(PTR, VAL, MO) \ + __atomic_fetch_and ((PTR), (VAL), (MO)) + + +typedef _Atomic struct +{ +#if __GCC_ATOMIC_TEST_AND_SET_TRUEVAL == 1 + _Bool __val; +#else + unsigned char __val; +#endif +} atomic_flag; + +#define ATOMIC_FLAG_INIT { 0 } + + +#define atomic_flag_test_and_set(PTR) \ + __atomic_test_and_set ((PTR), __ATOMIC_SEQ_CST) +#define atomic_flag_test_and_set_explicit(PTR, MO) \ + __atomic_test_and_set ((PTR), (MO)) + +#define atomic_flag_clear(PTR) __atomic_clear ((PTR), __ATOMIC_SEQ_CST) +#define atomic_flag_clear_explicit(PTR, MO) __atomic_clear ((PTR), (MO)) + +#endif /* _STDATOMIC_H */
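atomic_flag is the one type the standard guarantees to be lock-free, which makes it the canonical building block for a spinlock. A sketch of that pattern using the macros above (the lock name and critical section are illustrative):

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void with_lock(void)
    {
      /* Spin until the flag was previously clear, i.e. we acquired it. */
      while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;
      /* ... critical section ... */
      atomic_flag_clear_explicit(&lock, memory_order_release);
    }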
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdbool.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdbool.h new file mode 100644 index 0000000..f4e802f --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdbool.h
@@ -0,0 +1,50 @@ +/* Copyright (C) 1998-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; either version 3, or (at your option) +any later version. + +GCC is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +/* + * ISO C Standard: 7.16 Boolean type and values <stdbool.h> + */ + +#ifndef _STDBOOL_H +#define _STDBOOL_H + +#ifndef __cplusplus + +#define bool _Bool +#define true 1 +#define false 0 + +#else /* __cplusplus */ + +/* Supporting <stdbool.h> in C++ is a GCC extension. */ +#define _Bool bool +#define bool bool +#define false false +#define true true + +#endif /* __cplusplus */ + +/* Signal that all the definitions are present. */ +#define __bool_true_false_are_defined 1 + +#endif /* stdbool.h */
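In C, bool/true/false are macros over the built-in _Bool; in C++ the header is effectively a no-op that maps everything to the native type. A small sketch of portable use (the function name is illustrative):

    #include <stdbool.h>

    #if !__bool_true_false_are_defined
    # error "stdbool.h did not provide boolean support"
    #endif

    bool is_even(int n)
    {
      return n % 2 == 0;  /* yields exactly 0 or 1, in both C and C++ */
    }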
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stddef.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stddef.h new file mode 100644 index 0000000..cfa8df3 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stddef.h
@@ -0,0 +1,439 @@ +/* Copyright (C) 1989-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; either version 3, or (at your option) +any later version. + +GCC is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +/* + * ISO C Standard: 7.17 Common definitions <stddef.h> + */ +#if (!defined(_STDDEF_H) && !defined(_STDDEF_H_) && !defined(_ANSI_STDDEF_H) \ + && !defined(__STDDEF_H__)) \ + || defined(__need_wchar_t) || defined(__need_size_t) \ + || defined(__need_ptrdiff_t) || defined(__need_NULL) \ + || defined(__need_wint_t) + +/* Any one of these symbols __need_* means that GNU libc + wants us just to define one data type. So don't define + the symbols that indicate this file's entire job has been done. */ +#if (!defined(__need_wchar_t) && !defined(__need_size_t) \ + && !defined(__need_ptrdiff_t) && !defined(__need_NULL) \ + && !defined(__need_wint_t)) +#define _STDDEF_H +#define _STDDEF_H_ +/* snaroff@next.com says the NeXT needs this. */ +#define _ANSI_STDDEF_H +#endif + +#ifndef __sys_stdtypes_h +/* This avoids lossage on SunOS but only if stdtypes.h comes first. + There's no way to win with the other order! Sun lossage. */ + +/* On 4.3bsd-net2, make sure ansi.h is included, so we have + one less case to deal with in the following. */ +#if defined (__BSD_NET2__) || defined (____386BSD____) || (defined (__FreeBSD__) && (__FreeBSD__ < 5)) || defined(__NetBSD__) +#include <machine/ansi.h> +#endif +/* On FreeBSD 5, machine/ansi.h does not exist anymore... */ +#if defined (__FreeBSD__) && (__FreeBSD__ >= 5) +#include <sys/_types.h> +#endif + +/* In 4.3bsd-net2, machine/ansi.h defines these symbols, which are + defined if the corresponding type is *not* defined. + FreeBSD-2.1 defines _MACHINE_ANSI_H_ instead of _ANSI_H_. + NetBSD defines _I386_ANSI_H_ and _X86_64_ANSI_H_ instead of _ANSI_H_ */ +#if defined(_ANSI_H_) || defined(_MACHINE_ANSI_H_) || defined(_X86_64_ANSI_H_) || defined(_I386_ANSI_H_) +#if !defined(_SIZE_T_) && !defined(_BSD_SIZE_T_) +#define _SIZE_T +#endif +#if !defined(_PTRDIFF_T_) && !defined(_BSD_PTRDIFF_T_) +#define _PTRDIFF_T +#endif +/* On BSD/386 1.1, at least, machine/ansi.h defines _BSD_WCHAR_T_ + instead of _WCHAR_T_. */ +#if !defined(_WCHAR_T_) && !defined(_BSD_WCHAR_T_) +#ifndef _BSD_WCHAR_T_ +#define _WCHAR_T +#endif +#endif +/* Undef _FOO_T_ if we are supposed to define foo_t. 
*/ +#if defined (__need_ptrdiff_t) || defined (_STDDEF_H_) +#undef _PTRDIFF_T_ +#undef _BSD_PTRDIFF_T_ +#endif +#if defined (__need_size_t) || defined (_STDDEF_H_) +#undef _SIZE_T_ +#undef _BSD_SIZE_T_ +#endif +#if defined (__need_wchar_t) || defined (_STDDEF_H_) +#undef _WCHAR_T_ +#undef _BSD_WCHAR_T_ +#endif +#endif /* defined(_ANSI_H_) || defined(_MACHINE_ANSI_H_) || defined(_X86_64_ANSI_H_) || defined(_I386_ANSI_H_) */ + +/* Sequent's header files use _PTRDIFF_T_ in some conflicting way. + Just ignore it. */ +#if defined (__sequent__) && defined (_PTRDIFF_T_) +#undef _PTRDIFF_T_ +#endif + +/* On VxWorks, <type/vxTypesBase.h> may have defined macros like + _TYPE_size_t which will typedef size_t. fixincludes patched the + vxTypesBase.h so that this macro is only defined if _GCC_SIZE_T is + not defined, and so that defining this macro defines _GCC_SIZE_T. + If we find that the macros are still defined at this point, we must + invoke them so that the type is defined as expected. */ +#if defined (_TYPE_ptrdiff_t) && (defined (__need_ptrdiff_t) || defined (_STDDEF_H_)) +_TYPE_ptrdiff_t; +#undef _TYPE_ptrdiff_t +#endif +#if defined (_TYPE_size_t) && (defined (__need_size_t) || defined (_STDDEF_H_)) +_TYPE_size_t; +#undef _TYPE_size_t +#endif +#if defined (_TYPE_wchar_t) && (defined (__need_wchar_t) || defined (_STDDEF_H_)) +_TYPE_wchar_t; +#undef _TYPE_wchar_t +#endif + +/* In case nobody has defined these types, but we aren't running under + GCC 2.00, make sure that __PTRDIFF_TYPE__, __SIZE_TYPE__, and + __WCHAR_TYPE__ have reasonable values. This can happen if the + parts of GCC is compiled by an older compiler, that actually + include gstddef.h, such as collect2. */ + +/* Signed type of difference of two pointers. */ + +/* Define this type if we are doing the whole job, + or if we want this type in particular. */ +#if defined (_STDDEF_H) || defined (__need_ptrdiff_t) +#ifndef _PTRDIFF_T /* in case <sys/types.h> has defined it. */ +#ifndef _T_PTRDIFF_ +#ifndef _T_PTRDIFF +#ifndef __PTRDIFF_T +#ifndef _PTRDIFF_T_ +#ifndef _BSD_PTRDIFF_T_ +#ifndef ___int_ptrdiff_t_h +#ifndef _GCC_PTRDIFF_T +#define _PTRDIFF_T +#define _T_PTRDIFF_ +#define _T_PTRDIFF +#define __PTRDIFF_T +#define _PTRDIFF_T_ +#define _BSD_PTRDIFF_T_ +#define ___int_ptrdiff_t_h +#define _GCC_PTRDIFF_T +#ifndef __PTRDIFF_TYPE__ +#define __PTRDIFF_TYPE__ long int +#endif +typedef __PTRDIFF_TYPE__ ptrdiff_t; +#endif /* _GCC_PTRDIFF_T */ +#endif /* ___int_ptrdiff_t_h */ +#endif /* _BSD_PTRDIFF_T_ */ +#endif /* _PTRDIFF_T_ */ +#endif /* __PTRDIFF_T */ +#endif /* _T_PTRDIFF */ +#endif /* _T_PTRDIFF_ */ +#endif /* _PTRDIFF_T */ + +/* If this symbol has done its job, get rid of it. */ +#undef __need_ptrdiff_t + +#endif /* _STDDEF_H or __need_ptrdiff_t. */ + +/* Unsigned type of `sizeof' something. */ + +/* Define this type if we are doing the whole job, + or if we want this type in particular. */ +#if defined (_STDDEF_H) || defined (__need_size_t) +#ifndef __size_t__ /* BeOS */ +#ifndef __SIZE_T__ /* Cray Unicos/Mk */ +#ifndef _SIZE_T /* in case <sys/types.h> has defined it. 
*/ +#ifndef _SYS_SIZE_T_H +#ifndef _T_SIZE_ +#ifndef _T_SIZE +#ifndef __SIZE_T +#ifndef _SIZE_T_ +#ifndef _BSD_SIZE_T_ +#ifndef _SIZE_T_DEFINED_ +#ifndef _SIZE_T_DEFINED +#ifndef _BSD_SIZE_T_DEFINED_ /* Darwin */ +#ifndef _SIZE_T_DECLARED /* FreeBSD 5 */ +#ifndef ___int_size_t_h +#ifndef _GCC_SIZE_T +#ifndef _SIZET_ +#ifndef __size_t +#define __size_t__ /* BeOS */ +#define __SIZE_T__ /* Cray Unicos/Mk */ +#define _SIZE_T +#define _SYS_SIZE_T_H +#define _T_SIZE_ +#define _T_SIZE +#define __SIZE_T +#define _SIZE_T_ +#define _BSD_SIZE_T_ +#define _SIZE_T_DEFINED_ +#define _SIZE_T_DEFINED +#define _BSD_SIZE_T_DEFINED_ /* Darwin */ +#define _SIZE_T_DECLARED /* FreeBSD 5 */ +#define ___int_size_t_h +#define _GCC_SIZE_T +#define _SIZET_ +#if (defined (__FreeBSD__) && (__FreeBSD__ >= 5)) \ + || defined(__FreeBSD_kernel__) +/* __size_t is a typedef on FreeBSD 5, must not trash it. */ +#elif defined (__VMS__) +/* __size_t is also a typedef on VMS. */ +#else +#define __size_t +#endif +#ifndef __SIZE_TYPE__ +#define __SIZE_TYPE__ long unsigned int +#endif +#if !(defined (__GNUG__) && defined (size_t)) +typedef __SIZE_TYPE__ size_t; +#ifdef __BEOS__ +typedef long ssize_t; +#endif /* __BEOS__ */ +#endif /* !(defined (__GNUG__) && defined (size_t)) */ +#endif /* __size_t */ +#endif /* _SIZET_ */ +#endif /* _GCC_SIZE_T */ +#endif /* ___int_size_t_h */ +#endif /* _SIZE_T_DECLARED */ +#endif /* _BSD_SIZE_T_DEFINED_ */ +#endif /* _SIZE_T_DEFINED */ +#endif /* _SIZE_T_DEFINED_ */ +#endif /* _BSD_SIZE_T_ */ +#endif /* _SIZE_T_ */ +#endif /* __SIZE_T */ +#endif /* _T_SIZE */ +#endif /* _T_SIZE_ */ +#endif /* _SYS_SIZE_T_H */ +#endif /* _SIZE_T */ +#endif /* __SIZE_T__ */ +#endif /* __size_t__ */ +#undef __need_size_t +#endif /* _STDDEF_H or __need_size_t. */ + + +/* Wide character type. + Locale-writers should change this as necessary to + be big enough to hold unique values not between 0 and 127, + and not (wchar_t) -1, for each defined multibyte character. */ + +/* Define this type if we are doing the whole job, + or if we want this type in particular. */ +#if defined (_STDDEF_H) || defined (__need_wchar_t) +#ifndef __wchar_t__ /* BeOS */ +#ifndef __WCHAR_T__ /* Cray Unicos/Mk */ +#ifndef _WCHAR_T +#ifndef _T_WCHAR_ +#ifndef _T_WCHAR +#ifndef __WCHAR_T +#ifndef _WCHAR_T_ +#ifndef _BSD_WCHAR_T_ +#ifndef _BSD_WCHAR_T_DEFINED_ /* Darwin */ +#ifndef _BSD_RUNE_T_DEFINED_ /* Darwin */ +#ifndef _WCHAR_T_DECLARED /* FreeBSD 5 */ +#ifndef _WCHAR_T_DEFINED_ +#ifndef _WCHAR_T_DEFINED +#ifndef _WCHAR_T_H +#ifndef ___int_wchar_t_h +#ifndef __INT_WCHAR_T_H +#ifndef _GCC_WCHAR_T +#define __wchar_t__ /* BeOS */ +#define __WCHAR_T__ /* Cray Unicos/Mk */ +#define _WCHAR_T +#define _T_WCHAR_ +#define _T_WCHAR +#define __WCHAR_T +#define _WCHAR_T_ +#define _BSD_WCHAR_T_ +#define _WCHAR_T_DEFINED_ +#define _WCHAR_T_DEFINED +#define _WCHAR_T_H +#define ___int_wchar_t_h +#define __INT_WCHAR_T_H +#define _GCC_WCHAR_T +#define _WCHAR_T_DECLARED + +/* On BSD/386 1.1, at least, machine/ansi.h defines _BSD_WCHAR_T_ + instead of _WCHAR_T_, and _BSD_RUNE_T_ (which, unlike the other + symbols in the _FOO_T_ family, stays defined even after its + corresponding type is defined). If we define wchar_t, then we + must undef _WCHAR_T_; for BSD/386 1.1 (and perhaps others), if + we undef _WCHAR_T_, then we must also define rune_t, since + headers like runetype.h assume that if machine/ansi.h is included, + and _BSD_WCHAR_T_ is not defined, then rune_t is available. 
+ machine/ansi.h says, "Note that _WCHAR_T_ and _RUNE_T_ must be of + the same type." */ +#ifdef _BSD_WCHAR_T_ +#undef _BSD_WCHAR_T_ +#ifdef _BSD_RUNE_T_ +#if !defined (_ANSI_SOURCE) && !defined (_POSIX_SOURCE) +typedef _BSD_RUNE_T_ rune_t; +#define _BSD_WCHAR_T_DEFINED_ +#define _BSD_RUNE_T_DEFINED_ /* Darwin */ +#if defined (__FreeBSD__) && (__FreeBSD__ < 5) +/* Why is this file so hard to maintain properly? In contrast to + the comment above regarding BSD/386 1.1, on FreeBSD for as long + as the symbol has existed, _BSD_RUNE_T_ must not stay defined or + redundant typedefs will occur when stdlib.h is included after this file. */ +#undef _BSD_RUNE_T_ +#endif +#endif +#endif +#endif +/* FreeBSD 5 can't be handled well using "traditional" logic above + since it no longer defines _BSD_RUNE_T_ yet still desires to export + rune_t in some cases... */ +#if defined (__FreeBSD__) && (__FreeBSD__ >= 5) +#if !defined (_ANSI_SOURCE) && !defined (_POSIX_SOURCE) +#if __BSD_VISIBLE +#ifndef _RUNE_T_DECLARED +typedef __rune_t rune_t; +#define _RUNE_T_DECLARED +#endif +#endif +#endif +#endif + +#ifndef __WCHAR_TYPE__ +#define __WCHAR_TYPE__ int +#endif +#ifndef __cplusplus +typedef __WCHAR_TYPE__ wchar_t; +#endif +#endif +#endif +#endif +#endif +#endif +#endif +#endif /* _WCHAR_T_DECLARED */ +#endif /* _BSD_RUNE_T_DEFINED_ */ +#endif +#endif +#endif +#endif +#endif +#endif +#endif +#endif /* __WCHAR_T__ */ +#endif /* __wchar_t__ */ +#undef __need_wchar_t +#endif /* _STDDEF_H or __need_wchar_t. */ + +#if defined (__need_wint_t) +#ifndef _WINT_T +#define _WINT_T + +#ifndef __WINT_TYPE__ +#define __WINT_TYPE__ unsigned int +#endif +typedef __WINT_TYPE__ wint_t; +#endif +#undef __need_wint_t +#endif + +/* In 4.3bsd-net2, leave these undefined to indicate that size_t, etc. + are already defined. */ +/* BSD/OS 3.1 and FreeBSD [23].x require the MACHINE_ANSI_H check here. */ +/* NetBSD 5 requires the I386_ANSI_H and X86_64_ANSI_H checks here. */ +#if defined(_ANSI_H_) || defined(_MACHINE_ANSI_H_) || defined(_X86_64_ANSI_H_) || defined(_I386_ANSI_H_) +/* The references to _GCC_PTRDIFF_T_, _GCC_SIZE_T_, and _GCC_WCHAR_T_ + are probably typos and should be removed before 2.8 is released. */ +#ifdef _GCC_PTRDIFF_T_ +#undef _PTRDIFF_T_ +#undef _BSD_PTRDIFF_T_ +#endif +#ifdef _GCC_SIZE_T_ +#undef _SIZE_T_ +#undef _BSD_SIZE_T_ +#endif +#ifdef _GCC_WCHAR_T_ +#undef _WCHAR_T_ +#undef _BSD_WCHAR_T_ +#endif +/* The following ones are the real ones. */ +#ifdef _GCC_PTRDIFF_T +#undef _PTRDIFF_T_ +#undef _BSD_PTRDIFF_T_ +#endif +#ifdef _GCC_SIZE_T +#undef _SIZE_T_ +#undef _BSD_SIZE_T_ +#endif +#ifdef _GCC_WCHAR_T +#undef _WCHAR_T_ +#undef _BSD_WCHAR_T_ +#endif +#endif /* _ANSI_H_ || _MACHINE_ANSI_H_ || _X86_64_ANSI_H_ || _I386_ANSI_H_ */ + +#endif /* __sys_stdtypes_h */ + +/* A null pointer constant. */ + +#if defined (_STDDEF_H) || defined (__need_NULL) +#undef NULL /* in case <stdio.h> has defined it. */ +#ifdef __GNUG__ +#define NULL __null +#else /* G++ */ +#ifndef __cplusplus +#define NULL ((void *)0) +#else /* C++ */ +#define NULL 0 +#endif /* C++ */ +#endif /* G++ */ +#endif /* NULL not defined and <stddef.h> or need NULL. */ +#undef __need_NULL + +#ifdef _STDDEF_H + +/* Offset of member MEMBER in a struct of type TYPE. 
*/ +#define offsetof(TYPE, MEMBER) __builtin_offsetof (TYPE, MEMBER) + +#if (defined (__STDC_VERSION__) && __STDC_VERSION__ >= 201112L) \ + || (defined(__cplusplus) && __cplusplus >= 201103L) +#ifndef _GCC_MAX_ALIGN_T +#define _GCC_MAX_ALIGN_T +/* Type whose alignment is supported in every context and is at least + as great as that of any standard type not using alignment + specifiers. */ +typedef struct { + long long __max_align_ll __attribute__((__aligned__(__alignof__(long long)))); + long double __max_align_ld __attribute__((__aligned__(__alignof__(long double)))); +} max_align_t; +#endif +#endif /* C11 or C++11. */ + +#if defined(__cplusplus) && __cplusplus >= 201103L +#ifndef _GXX_NULLPTR_T +#define _GXX_NULLPTR_T + typedef decltype(nullptr) nullptr_t; +#endif +#endif /* C++11. */ + +#endif /* _STDDEF_H was defined this time */ + +#endif /* !_STDDEF_H && !_STDDEF_H_ && !_ANSI_STDDEF_H && !__STDDEF_H__ + || __need_XXX was not defined before */
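The __need_* protocol described in the comments above lets a C library pull in a single definition without marking the whole header as done. A hypothetical consumer looks like this (sketch only; the function name is illustrative):

    /* Request only size_t; _STDDEF_H is deliberately left undefined, so a
       later plain #include <stddef.h> still provides everything else. */
    #define __need_size_t
    #include <stddef.h>

    size_t count_bytes(const char *s);

    /* A normal include afterwards supplies ptrdiff_t, wchar_t, NULL,
       offsetof, and (under C11/C++11) max_align_t as well. */
    #include <stddef.h>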
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdfix.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdfix.h new file mode 100644 index 0000000..93e759a --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdfix.h
@@ -0,0 +1,204 @@ +/* Copyright (C) 2007-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; either version 3, or (at your option) +any later version. + +GCC is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +/* ISO/IEC JTC1 SC22 WG14 N1169 + * Date: 2006-04-04 + * ISO/IEC TR 18037 + * Programming languages - C - Extensions to support embedded processors + */ + +#ifndef _STDFIX_H +#define _STDFIX_H + +/* 7.18a.1 Introduction. */ + +#undef fract +#undef accum +#undef sat +#define fract _Fract +#define accum _Accum +#define sat _Sat + +/* 7.18a.3 Precision macros. */ + +#undef SFRACT_FBIT +#undef SFRACT_MIN +#undef SFRACT_MAX +#undef SFRACT_EPSILON +#define SFRACT_FBIT __SFRACT_FBIT__ +#define SFRACT_MIN __SFRACT_MIN__ +#define SFRACT_MAX __SFRACT_MAX__ +#define SFRACT_EPSILON __SFRACT_EPSILON__ + +#undef USFRACT_FBIT +#undef USFRACT_MIN +#undef USFRACT_MAX +#undef USFRACT_EPSILON +#define USFRACT_FBIT __USFRACT_FBIT__ +#define USFRACT_MIN __USFRACT_MIN__ /* GCC extension. */ +#define USFRACT_MAX __USFRACT_MAX__ +#define USFRACT_EPSILON __USFRACT_EPSILON__ + +#undef FRACT_FBIT +#undef FRACT_MIN +#undef FRACT_MAX +#undef FRACT_EPSILON +#define FRACT_FBIT __FRACT_FBIT__ +#define FRACT_MIN __FRACT_MIN__ +#define FRACT_MAX __FRACT_MAX__ +#define FRACT_EPSILON __FRACT_EPSILON__ + +#undef UFRACT_FBIT +#undef UFRACT_MIN +#undef UFRACT_MAX +#undef UFRACT_EPSILON +#define UFRACT_FBIT __UFRACT_FBIT__ +#define UFRACT_MIN __UFRACT_MIN__ /* GCC extension. */ +#define UFRACT_MAX __UFRACT_MAX__ +#define UFRACT_EPSILON __UFRACT_EPSILON__ + +#undef LFRACT_FBIT +#undef LFRACT_MIN +#undef LFRACT_MAX +#undef LFRACT_EPSILON +#define LFRACT_FBIT __LFRACT_FBIT__ +#define LFRACT_MIN __LFRACT_MIN__ +#define LFRACT_MAX __LFRACT_MAX__ +#define LFRACT_EPSILON __LFRACT_EPSILON__ + +#undef ULFRACT_FBIT +#undef ULFRACT_MIN +#undef ULFRACT_MAX +#undef ULFRACT_EPSILON +#define ULFRACT_FBIT __ULFRACT_FBIT__ +#define ULFRACT_MIN __ULFRACT_MIN__ /* GCC extension. */ +#define ULFRACT_MAX __ULFRACT_MAX__ +#define ULFRACT_EPSILON __ULFRACT_EPSILON__ + +#undef LLFRACT_FBIT +#undef LLFRACT_MIN +#undef LLFRACT_MAX +#undef LLFRACT_EPSILON +#define LLFRACT_FBIT __LLFRACT_FBIT__ /* GCC extension. */ +#define LLFRACT_MIN __LLFRACT_MIN__ /* GCC extension. */ +#define LLFRACT_MAX __LLFRACT_MAX__ /* GCC extension. */ +#define LLFRACT_EPSILON __LLFRACT_EPSILON__ /* GCC extension. */ + +#undef ULLFRACT_FBIT +#undef ULLFRACT_MIN +#undef ULLFRACT_MAX +#undef ULLFRACT_EPSILON +#define ULLFRACT_FBIT __ULLFRACT_FBIT__ /* GCC extension. */ +#define ULLFRACT_MIN __ULLFRACT_MIN__ /* GCC extension. */ +#define ULLFRACT_MAX __ULLFRACT_MAX__ /* GCC extension. */ +#define ULLFRACT_EPSILON __ULLFRACT_EPSILON__ /* GCC extension. 
*/ + +#undef SACCUM_FBIT +#undef SACCUM_IBIT +#undef SACCUM_MIN +#undef SACCUM_MAX +#undef SACCUM_EPSILON +#define SACCUM_FBIT __SACCUM_FBIT__ +#define SACCUM_IBIT __SACCUM_IBIT__ +#define SACCUM_MIN __SACCUM_MIN__ +#define SACCUM_MAX __SACCUM_MAX__ +#define SACCUM_EPSILON __SACCUM_EPSILON__ + +#undef USACCUM_FBIT +#undef USACCUM_IBIT +#undef USACCUM_MIN +#undef USACCUM_MAX +#undef USACCUM_EPSILON +#define USACCUM_FBIT __USACCUM_FBIT__ +#define USACCUM_IBIT __USACCUM_IBIT__ +#define USACCUM_MIN __USACCUM_MIN__ /* GCC extension. */ +#define USACCUM_MAX __USACCUM_MAX__ +#define USACCUM_EPSILON __USACCUM_EPSILON__ + +#undef ACCUM_FBIT +#undef ACCUM_IBIT +#undef ACCUM_MIN +#undef ACCUM_MAX +#undef ACCUM_EPSILON +#define ACCUM_FBIT __ACCUM_FBIT__ +#define ACCUM_IBIT __ACCUM_IBIT__ +#define ACCUM_MIN __ACCUM_MIN__ +#define ACCUM_MAX __ACCUM_MAX__ +#define ACCUM_EPSILON __ACCUM_EPSILON__ + +#undef UACCUM_FBIT +#undef UACCUM_IBIT +#undef UACCUM_MIN +#undef UACCUM_MAX +#undef UACCUM_EPSILON +#define UACCUM_FBIT __UACCUM_FBIT__ +#define UACCUM_IBIT __UACCUM_IBIT__ +#define UACCUM_MIN __UACCUM_MIN__ /* GCC extension. */ +#define UACCUM_MAX __UACCUM_MAX__ +#define UACCUM_EPSILON __UACCUM_EPSILON__ + +#undef LACCUM_FBIT +#undef LACCUM_IBIT +#undef LACCUM_MIN +#undef LACCUM_MAX +#undef LACCUM_EPSILON +#define LACCUM_FBIT __LACCUM_FBIT__ +#define LACCUM_IBIT __LACCUM_IBIT__ +#define LACCUM_MIN __LACCUM_MIN__ +#define LACCUM_MAX __LACCUM_MAX__ +#define LACCUM_EPSILON __LACCUM_EPSILON__ + +#undef ULACCUM_FBIT +#undef ULACCUM_IBIT +#undef ULACCUM_MIN +#undef ULACCUM_MAX +#undef ULACCUM_EPSILON +#define ULACCUM_FBIT __ULACCUM_FBIT__ +#define ULACCUM_IBIT __ULACCUM_IBIT__ +#define ULACCUM_MIN __ULACCUM_MIN__ /* GCC extension. */ +#define ULACCUM_MAX __ULACCUM_MAX__ +#define ULACCUM_EPSILON __ULACCUM_EPSILON__ + +#undef LLACCUM_FBIT +#undef LLACCUM_IBIT +#undef LLACCUM_MIN +#undef LLACCUM_MAX +#undef LLACCUM_EPSILON +#define LLACCUM_FBIT __LLACCUM_FBIT__ /* GCC extension. */ +#define LLACCUM_IBIT __LLACCUM_IBIT__ /* GCC extension. */ +#define LLACCUM_MIN __LLACCUM_MIN__ /* GCC extension. */ +#define LLACCUM_MAX __LLACCUM_MAX__ /* GCC extension. */ +#define LLACCUM_EPSILON __LLACCUM_EPSILON__ /* GCC extension. */ + +#undef ULLACCUM_FBIT +#undef ULLACCUM_IBIT +#undef ULLACCUM_MIN +#undef ULLACCUM_MAX +#undef ULLACCUM_EPSILON +#define ULLACCUM_FBIT __ULLACCUM_FBIT__ /* GCC extension. */ +#define ULLACCUM_IBIT __ULLACCUM_IBIT__ /* GCC extension. */ +#define ULLACCUM_MIN __ULLACCUM_MIN__ /* GCC extension. */ +#define ULLACCUM_MAX __ULLACCUM_MAX__ /* GCC extension. */ +#define ULLACCUM_EPSILON __ULLACCUM_EPSILON__ /* GCC extension. */ + +#endif /* _STDFIX_H */
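These macros only come to life on targets where GCC enables the TR 18037 fixed-point extension (typically embedded backends; a plain aarch64 compiler may reject the types outright). Purely as a sketch of the spelling where it is supported:

    #include <stdfix.h>

    /* The 'r' suffix marks a fract constant; 'sat' makes overflow clamp
       to FRACT_MAX/FRACT_MIN instead of wrapping. */
    static const sat fract half = 0.5r;

    sat fract clamp_add(sat fract a, sat fract b)
    {
      return a + b;  /* saturates rather than overflowing */
    }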
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdint-gcc.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdint-gcc.h new file mode 100644 index 0000000..1470cea --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdint-gcc.h
@@ -0,0 +1,263 @@ +/* Copyright (C) 2008-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; either version 3, or (at your option) +any later version. + +GCC is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +/* + * ISO C Standard: 7.18 Integer types <stdint.h> + */ + +#ifndef _GCC_STDINT_H +#define _GCC_STDINT_H + +/* 7.8.1.1 Exact-width integer types */ + +#ifdef __INT8_TYPE__ +typedef __INT8_TYPE__ int8_t; +#endif +#ifdef __INT16_TYPE__ +typedef __INT16_TYPE__ int16_t; +#endif +#ifdef __INT32_TYPE__ +typedef __INT32_TYPE__ int32_t; +#endif +#ifdef __INT64_TYPE__ +typedef __INT64_TYPE__ int64_t; +#endif +#ifdef __UINT8_TYPE__ +typedef __UINT8_TYPE__ uint8_t; +#endif +#ifdef __UINT16_TYPE__ +typedef __UINT16_TYPE__ uint16_t; +#endif +#ifdef __UINT32_TYPE__ +typedef __UINT32_TYPE__ uint32_t; +#endif +#ifdef __UINT64_TYPE__ +typedef __UINT64_TYPE__ uint64_t; +#endif + +/* 7.8.1.2 Minimum-width integer types */ + +typedef __INT_LEAST8_TYPE__ int_least8_t; +typedef __INT_LEAST16_TYPE__ int_least16_t; +typedef __INT_LEAST32_TYPE__ int_least32_t; +typedef __INT_LEAST64_TYPE__ int_least64_t; +typedef __UINT_LEAST8_TYPE__ uint_least8_t; +typedef __UINT_LEAST16_TYPE__ uint_least16_t; +typedef __UINT_LEAST32_TYPE__ uint_least32_t; +typedef __UINT_LEAST64_TYPE__ uint_least64_t; + +/* 7.8.1.3 Fastest minimum-width integer types */ + +typedef __INT_FAST8_TYPE__ int_fast8_t; +typedef __INT_FAST16_TYPE__ int_fast16_t; +typedef __INT_FAST32_TYPE__ int_fast32_t; +typedef __INT_FAST64_TYPE__ int_fast64_t; +typedef __UINT_FAST8_TYPE__ uint_fast8_t; +typedef __UINT_FAST16_TYPE__ uint_fast16_t; +typedef __UINT_FAST32_TYPE__ uint_fast32_t; +typedef __UINT_FAST64_TYPE__ uint_fast64_t; + +/* 7.8.1.4 Integer types capable of holding object pointers */ + +#ifdef __INTPTR_TYPE__ +typedef __INTPTR_TYPE__ intptr_t; +#endif +#ifdef __UINTPTR_TYPE__ +typedef __UINTPTR_TYPE__ uintptr_t; +#endif + +/* 7.8.1.5 Greatest-width integer types */ + +typedef __INTMAX_TYPE__ intmax_t; +typedef __UINTMAX_TYPE__ uintmax_t; + +#if (!defined __cplusplus || __cplusplus >= 201103L \ + || defined __STDC_LIMIT_MACROS) + +/* 7.18.2 Limits of specified-width integer types */ + +#ifdef __INT8_MAX__ +# undef INT8_MAX +# define INT8_MAX __INT8_MAX__ +# undef INT8_MIN +# define INT8_MIN (-INT8_MAX - 1) +#endif +#ifdef __UINT8_MAX__ +# undef UINT8_MAX +# define UINT8_MAX __UINT8_MAX__ +#endif +#ifdef __INT16_MAX__ +# undef INT16_MAX +# define INT16_MAX __INT16_MAX__ +# undef INT16_MIN +# define INT16_MIN (-INT16_MAX - 1) +#endif +#ifdef __UINT16_MAX__ +# undef UINT16_MAX +# define UINT16_MAX __UINT16_MAX__ +#endif +#ifdef __INT32_MAX__ +# undef INT32_MAX +# define INT32_MAX __INT32_MAX__ +# undef INT32_MIN +# define INT32_MIN (-INT32_MAX - 1) +#endif +#ifdef __UINT32_MAX__ 
+# undef UINT32_MAX +# define UINT32_MAX __UINT32_MAX__ +#endif +#ifdef __INT64_MAX__ +# undef INT64_MAX +# define INT64_MAX __INT64_MAX__ +# undef INT64_MIN +# define INT64_MIN (-INT64_MAX - 1) +#endif +#ifdef __UINT64_MAX__ +# undef UINT64_MAX +# define UINT64_MAX __UINT64_MAX__ +#endif + +#undef INT_LEAST8_MAX +#define INT_LEAST8_MAX __INT_LEAST8_MAX__ +#undef INT_LEAST8_MIN +#define INT_LEAST8_MIN (-INT_LEAST8_MAX - 1) +#undef UINT_LEAST8_MAX +#define UINT_LEAST8_MAX __UINT_LEAST8_MAX__ +#undef INT_LEAST16_MAX +#define INT_LEAST16_MAX __INT_LEAST16_MAX__ +#undef INT_LEAST16_MIN +#define INT_LEAST16_MIN (-INT_LEAST16_MAX - 1) +#undef UINT_LEAST16_MAX +#define UINT_LEAST16_MAX __UINT_LEAST16_MAX__ +#undef INT_LEAST32_MAX +#define INT_LEAST32_MAX __INT_LEAST32_MAX__ +#undef INT_LEAST32_MIN +#define INT_LEAST32_MIN (-INT_LEAST32_MAX - 1) +#undef UINT_LEAST32_MAX +#define UINT_LEAST32_MAX __UINT_LEAST32_MAX__ +#undef INT_LEAST64_MAX +#define INT_LEAST64_MAX __INT_LEAST64_MAX__ +#undef INT_LEAST64_MIN +#define INT_LEAST64_MIN (-INT_LEAST64_MAX - 1) +#undef UINT_LEAST64_MAX +#define UINT_LEAST64_MAX __UINT_LEAST64_MAX__ + +#undef INT_FAST8_MAX +#define INT_FAST8_MAX __INT_FAST8_MAX__ +#undef INT_FAST8_MIN +#define INT_FAST8_MIN (-INT_FAST8_MAX - 1) +#undef UINT_FAST8_MAX +#define UINT_FAST8_MAX __UINT_FAST8_MAX__ +#undef INT_FAST16_MAX +#define INT_FAST16_MAX __INT_FAST16_MAX__ +#undef INT_FAST16_MIN +#define INT_FAST16_MIN (-INT_FAST16_MAX - 1) +#undef UINT_FAST16_MAX +#define UINT_FAST16_MAX __UINT_FAST16_MAX__ +#undef INT_FAST32_MAX +#define INT_FAST32_MAX __INT_FAST32_MAX__ +#undef INT_FAST32_MIN +#define INT_FAST32_MIN (-INT_FAST32_MAX - 1) +#undef UINT_FAST32_MAX +#define UINT_FAST32_MAX __UINT_FAST32_MAX__ +#undef INT_FAST64_MAX +#define INT_FAST64_MAX __INT_FAST64_MAX__ +#undef INT_FAST64_MIN +#define INT_FAST64_MIN (-INT_FAST64_MAX - 1) +#undef UINT_FAST64_MAX +#define UINT_FAST64_MAX __UINT_FAST64_MAX__ + +#ifdef __INTPTR_MAX__ +# undef INTPTR_MAX +# define INTPTR_MAX __INTPTR_MAX__ +# undef INTPTR_MIN +# define INTPTR_MIN (-INTPTR_MAX - 1) +#endif +#ifdef __UINTPTR_MAX__ +# undef UINTPTR_MAX +# define UINTPTR_MAX __UINTPTR_MAX__ +#endif + +#undef INTMAX_MAX +#define INTMAX_MAX __INTMAX_MAX__ +#undef INTMAX_MIN +#define INTMAX_MIN (-INTMAX_MAX - 1) +#undef UINTMAX_MAX +#define UINTMAX_MAX __UINTMAX_MAX__ + +/* 7.18.3 Limits of other integer types */ + +#undef PTRDIFF_MAX +#define PTRDIFF_MAX __PTRDIFF_MAX__ +#undef PTRDIFF_MIN +#define PTRDIFF_MIN (-PTRDIFF_MAX - 1) + +#undef SIG_ATOMIC_MAX +#define SIG_ATOMIC_MAX __SIG_ATOMIC_MAX__ +#undef SIG_ATOMIC_MIN +#define SIG_ATOMIC_MIN __SIG_ATOMIC_MIN__ + +#undef SIZE_MAX +#define SIZE_MAX __SIZE_MAX__ + +#undef WCHAR_MAX +#define WCHAR_MAX __WCHAR_MAX__ +#undef WCHAR_MIN +#define WCHAR_MIN __WCHAR_MIN__ + +#undef WINT_MAX +#define WINT_MAX __WINT_MAX__ +#undef WINT_MIN +#define WINT_MIN __WINT_MIN__ + +#endif /* (!defined __cplusplus || __cplusplus >= 201103L + || defined __STDC_LIMIT_MACROS) */ + +#if (!defined __cplusplus || __cplusplus >= 201103L \ + || defined __STDC_CONSTANT_MACROS) + +#undef INT8_C +#define INT8_C(c) __INT8_C(c) +#undef INT16_C +#define INT16_C(c) __INT16_C(c) +#undef INT32_C +#define INT32_C(c) __INT32_C(c) +#undef INT64_C +#define INT64_C(c) __INT64_C(c) +#undef UINT8_C +#define UINT8_C(c) __UINT8_C(c) +#undef UINT16_C +#define UINT16_C(c) __UINT16_C(c) +#undef UINT32_C +#define UINT32_C(c) __UINT32_C(c) +#undef UINT64_C +#define UINT64_C(c) __UINT64_C(c) +#undef INTMAX_C +#define INTMAX_C(c) __INTMAX_C(c) 
+#undef UINTMAX_C +#define UINTMAX_C(c) __UINTMAX_C(c) + +#endif /* (!defined __cplusplus || __cplusplus >= 201103L + || defined __STDC_CONSTANT_MACROS) */ + +#endif /* _GCC_STDINT_H */
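Everything in this header reduces to compiler-predefined macros, so the limits are usable in preprocessor arithmetic and static assertions; the __STDC_LIMIT_MACROS / __STDC_CONSTANT_MACROS guards only matter for pre-C++11 C++. A small sketch (names illustrative, C11 mode assumed for _Static_assert):

    #include <stdint.h>

    /* INT64_C attaches the correct suffix for a 64-bit constant. */
    static const int64_t one_tib = INT64_C(1) << 40;

    _Static_assert(INT32_MIN == -INT32_MAX - 1, "two's-complement limits");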
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdint.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdint.h new file mode 100644 index 0000000..83b6f70 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdint.h
@@ -0,0 +1,14 @@ +#ifndef _GCC_WRAP_STDINT_H +#if __STDC_HOSTED__ +# if defined __cplusplus && __cplusplus >= 201103L +# undef __STDC_LIMIT_MACROS +# define __STDC_LIMIT_MACROS +# undef __STDC_CONSTANT_MACROS +# define __STDC_CONSTANT_MACROS +# endif +# include_next <stdint.h> +#else +# include "stdint-gcc.h" +#endif +#define _GCC_WRAP_STDINT_H +#endif
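This wrapper dispatches on __STDC_HOSTED__: hosted translation units forward via include_next to the C library's <stdint.h> (pre-defining the limit/constant macro guards when compiling C++11 or later), while freestanding ones fall back to the self-contained stdint-gcc.h above. A sketch of exercising the freestanding path:

    /* Compiled with -ffreestanding, __STDC_HOSTED__ is 0, so <stdint.h>
       resolves to stdint-gcc.h rather than the C library's header. */
    #include <stdint.h>

    uint32_t low_word(uint64_t x)
    {
      return (uint32_t) x;
    }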
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdnoreturn.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdnoreturn.h new file mode 100644 index 0000000..0134137 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/stdnoreturn.h
@@ -0,0 +1,35 @@ +/* Copyright (C) 2011-2014 Free Software Foundation, Inc. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; either version 3, or (at your option) +any later version. + +GCC is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +Under Section 7 of GPL version 3, you are granted additional +permissions described in the GCC Runtime Library Exception, version +3.1, as published by the Free Software Foundation. + +You should have received a copy of the GNU General Public License and +a copy of the GCC Runtime Library Exception along with this program; +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see +<http://www.gnu.org/licenses/>. */ + +/* ISO C1X: 7.23 _Noreturn <stdnoreturn.h>. */ + +#ifndef _STDNORETURN_H +#define _STDNORETURN_H + +#ifndef __cplusplus + +#define noreturn _Noreturn + +#endif + +#endif /* stdnoreturn.h */
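The header simply spells the C11 _Noreturn keyword as noreturn, and stays out of the way in C++ (which has its own [[noreturn]] attribute). Minimal usage, as a sketch:

    #include <stdlib.h>
    #include <stdnoreturn.h>

    noreturn void fatal(void)  /* expands to: _Noreturn void fatal(void) */
    {
      abort();
    }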
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/unwind.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/unwind.h new file mode 100644 index 0000000..d351fb9 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/unwind.h
@@ -0,0 +1,293 @@ +/* Exception handling and frame unwind runtime interface routines. + Copyright (C) 2001-2014 Free Software Foundation, Inc. + + This file is part of GCC. + + GCC is free software; you can redistribute it and/or modify it + under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 3, or (at your option) + any later version. + + GCC is distributed in the hope that it will be useful, but WITHOUT + ANY WARRANTY; without even the implied warranty of MERCHANTABILITY + or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public + License for more details. + + Under Section 7 of GPL version 3, you are granted additional + permissions described in the GCC Runtime Library Exception, version + 3.1, as published by the Free Software Foundation. + + You should have received a copy of the GNU General Public License and + a copy of the GCC Runtime Library Exception along with this program; + see the files COPYING3 and COPYING.RUNTIME respectively. If not, see + <http://www.gnu.org/licenses/>. */ + +/* This is derived from the C++ ABI for IA-64. Where we diverge + for cross-architecture compatibility are noted with "@@@". */ + +#ifndef _UNWIND_H +#define _UNWIND_H + +#if defined (__SEH__) && !defined (__USING_SJLJ_EXCEPTIONS__) +/* Only for _GCC_specific_handler. */ +#include <windows.h> +#endif + +#ifndef HIDE_EXPORTS +#pragma GCC visibility push(default) +#endif + +#ifdef __cplusplus +extern "C" { +#endif + +/* Level 1: Base ABI */ + +/* @@@ The IA-64 ABI uses uint64 throughout. Most places this is + inefficient for 32-bit and smaller machines. */ +typedef unsigned _Unwind_Word __attribute__((__mode__(__unwind_word__))); +typedef signed _Unwind_Sword __attribute__((__mode__(__unwind_word__))); +#if defined(__ia64__) && defined(__hpux__) +typedef unsigned _Unwind_Ptr __attribute__((__mode__(__word__))); +#else +typedef unsigned _Unwind_Ptr __attribute__((__mode__(__pointer__))); +#endif +typedef unsigned _Unwind_Internal_Ptr __attribute__((__mode__(__pointer__))); + +/* @@@ The IA-64 ABI uses a 64-bit word to identify the producer and + consumer of an exception. We'll go along with this for now even on + 32-bit machines. We'll need to provide some other option for + 16-bit machines and for machines with > 8 bits per byte. */ +typedef unsigned _Unwind_Exception_Class __attribute__((__mode__(__DI__))); + +/* The unwind interface uses reason codes in several contexts to + identify the reasons for failures or other actions. */ +typedef enum +{ + _URC_NO_REASON = 0, + _URC_FOREIGN_EXCEPTION_CAUGHT = 1, + _URC_FATAL_PHASE2_ERROR = 2, + _URC_FATAL_PHASE1_ERROR = 3, + _URC_NORMAL_STOP = 4, + _URC_END_OF_STACK = 5, + _URC_HANDLER_FOUND = 6, + _URC_INSTALL_CONTEXT = 7, + _URC_CONTINUE_UNWIND = 8 +} _Unwind_Reason_Code; + + +/* The unwind interface uses a pointer to an exception header object + as its representation of an exception being thrown. In general, the + full representation of an exception object is language- and + implementation-specific, but it will be prefixed by a header + understood by the unwind interface. 
*/ + +struct _Unwind_Exception; + +typedef void (*_Unwind_Exception_Cleanup_Fn) (_Unwind_Reason_Code, + struct _Unwind_Exception *); + +struct _Unwind_Exception +{ + _Unwind_Exception_Class exception_class; + _Unwind_Exception_Cleanup_Fn exception_cleanup; + +#if !defined (__USING_SJLJ_EXCEPTIONS__) && defined (__SEH__) + _Unwind_Word private_[6]; +#else + _Unwind_Word private_1; + _Unwind_Word private_2; +#endif + + /* @@@ The IA-64 ABI says that this structure must be double-word aligned. + Taking that literally does not make much sense generically. Instead we + provide the maximum alignment required by any type for the machine. */ +} __attribute__((__aligned__)); + + +/* The ACTIONS argument to the personality routine is a bitwise OR of one + or more of the following constants. */ +typedef int _Unwind_Action; + +#define _UA_SEARCH_PHASE 1 +#define _UA_CLEANUP_PHASE 2 +#define _UA_HANDLER_FRAME 4 +#define _UA_FORCE_UNWIND 8 +#define _UA_END_OF_STACK 16 + +/* The target can override this macro to define any back-end-specific + attributes required for the lowest-level stack frame. */ +#ifndef LIBGCC2_UNWIND_ATTRIBUTE +#define LIBGCC2_UNWIND_ATTRIBUTE +#endif + +/* This is an opaque type used to refer to a system-specific data + structure used by the system unwinder. This context is created and + destroyed by the system, and passed to the personality routine + during unwinding. */ +struct _Unwind_Context; + +/* Raise an exception, passing along the given exception object. */ +extern _Unwind_Reason_Code LIBGCC2_UNWIND_ATTRIBUTE +_Unwind_RaiseException (struct _Unwind_Exception *); + +/* Raise an exception for forced unwinding. */ + +typedef _Unwind_Reason_Code (*_Unwind_Stop_Fn) + (int, _Unwind_Action, _Unwind_Exception_Class, + struct _Unwind_Exception *, struct _Unwind_Context *, void *); + +extern _Unwind_Reason_Code LIBGCC2_UNWIND_ATTRIBUTE +_Unwind_ForcedUnwind (struct _Unwind_Exception *, _Unwind_Stop_Fn, void *); + +/* Helper to invoke the exception_cleanup routine. */ +extern void _Unwind_DeleteException (struct _Unwind_Exception *); + +/* Resume propagation of an existing exception. This is used after + e.g. executing cleanup code, and not to implement rethrowing. */ +extern void LIBGCC2_UNWIND_ATTRIBUTE +_Unwind_Resume (struct _Unwind_Exception *); + +/* @@@ Resume propagation of a FORCE_UNWIND exception, or to rethrow + a normal exception that was handled. */ +extern _Unwind_Reason_Code LIBGCC2_UNWIND_ATTRIBUTE +_Unwind_Resume_or_Rethrow (struct _Unwind_Exception *); + +/* @@@ Use unwind data to perform a stack backtrace. The trace callback + is called for every stack frame in the call chain, but no cleanup + actions are performed. */ +typedef _Unwind_Reason_Code (*_Unwind_Trace_Fn) + (struct _Unwind_Context *, void *); + +extern _Unwind_Reason_Code LIBGCC2_UNWIND_ATTRIBUTE +_Unwind_Backtrace (_Unwind_Trace_Fn, void *); + +/* These functions are used for communicating information about the unwind + context (i.e. the unwind descriptors and the user register state) between + the unwind library and the personality routine and landing pad. Only + selected registers may be manipulated. */ + +extern _Unwind_Word _Unwind_GetGR (struct _Unwind_Context *, int); +extern void _Unwind_SetGR (struct _Unwind_Context *, int, _Unwind_Word); + +extern _Unwind_Ptr _Unwind_GetIP (struct _Unwind_Context *); +extern _Unwind_Ptr _Unwind_GetIPInfo (struct _Unwind_Context *, int *); +extern void _Unwind_SetIP (struct _Unwind_Context *, _Unwind_Ptr); + +/* @@@ Retrieve the CFA of the given context. 
*/ +extern _Unwind_Word _Unwind_GetCFA (struct _Unwind_Context *); + +extern void *_Unwind_GetLanguageSpecificData (struct _Unwind_Context *); + +extern _Unwind_Ptr _Unwind_GetRegionStart (struct _Unwind_Context *); + + +/* The personality routine is the function in the C++ (or other language) + runtime library which serves as an interface between the system unwind + library and language-specific exception handling semantics. It is + specific to the code fragment described by an unwind info block, and + it is always referenced via the pointer in the unwind info block, and + hence it has no ABI-specified name. + + Note that this implies that two different C++ implementations can + use different names, and have different contents in the language + specific data area. Moreover, that the language specific data + area contains no version info because name of the function invoked + provides more effective versioning by detecting at link time the + lack of code to handle the different data format. */ + +typedef _Unwind_Reason_Code (*_Unwind_Personality_Fn) + (int, _Unwind_Action, _Unwind_Exception_Class, + struct _Unwind_Exception *, struct _Unwind_Context *); + +/* @@@ The following alternate entry points are for setjmp/longjmp + based unwinding. */ + +struct SjLj_Function_Context; +extern void _Unwind_SjLj_Register (struct SjLj_Function_Context *); +extern void _Unwind_SjLj_Unregister (struct SjLj_Function_Context *); + +extern _Unwind_Reason_Code LIBGCC2_UNWIND_ATTRIBUTE +_Unwind_SjLj_RaiseException (struct _Unwind_Exception *); +extern _Unwind_Reason_Code LIBGCC2_UNWIND_ATTRIBUTE +_Unwind_SjLj_ForcedUnwind (struct _Unwind_Exception *, _Unwind_Stop_Fn, void *); +extern void LIBGCC2_UNWIND_ATTRIBUTE +_Unwind_SjLj_Resume (struct _Unwind_Exception *); +extern _Unwind_Reason_Code LIBGCC2_UNWIND_ATTRIBUTE +_Unwind_SjLj_Resume_or_Rethrow (struct _Unwind_Exception *); + +/* @@@ The following provide access to the base addresses for text + and data-relative addressing in the LDSA. In order to stay link + compatible with the standard ABI for IA-64, we inline these. */ + +#ifdef __ia64__ +#include <stdlib.h> + +static inline _Unwind_Ptr +_Unwind_GetDataRelBase (struct _Unwind_Context *_C) +{ + /* The GP is stored in R1. */ + return _Unwind_GetGR (_C, 1); +} + +static inline _Unwind_Ptr +_Unwind_GetTextRelBase (struct _Unwind_Context *_C __attribute__ ((__unused__))) +{ + abort (); + return 0; +} + +/* @@@ Retrieve the Backing Store Pointer of the given context. */ +extern _Unwind_Word _Unwind_GetBSP (struct _Unwind_Context *); +#else +extern _Unwind_Ptr _Unwind_GetDataRelBase (struct _Unwind_Context *); +extern _Unwind_Ptr _Unwind_GetTextRelBase (struct _Unwind_Context *); +#endif + +/* @@@ Given an address, return the entry point of the function that + contains it. */ +extern void * _Unwind_FindEnclosingFunction (void *pc); + +#ifndef __SIZEOF_LONG__ + #error "__SIZEOF_LONG__ macro not defined" +#endif + +#ifndef __SIZEOF_POINTER__ + #error "__SIZEOF_POINTER__ macro not defined" +#endif + + +/* leb128 type numbers have a potentially unlimited size. + The target of the following definitions of _sleb128_t and _uleb128_t + is to have efficient data types large enough to hold the leb128 type + numbers used in the unwind code. + Mostly these types will simply be defined to long and unsigned long + except when a unsigned long data type on the target machine is not + capable of storing a pointer. 
*/ + +#if __SIZEOF_LONG__ >= __SIZEOF_POINTER__ + typedef long _sleb128_t; + typedef unsigned long _uleb128_t; +#elif __SIZEOF_LONG_LONG__ >= __SIZEOF_POINTER__ + typedef long long _sleb128_t; + typedef unsigned long long _uleb128_t; +#else +# error "What type shall we use for _sleb128_t?" +#endif + +#if defined (__SEH__) && !defined (__USING_SJLJ_EXCEPTIONS__) +/* Handles the mapping from SEH to GCC interfaces. */ +EXCEPTION_DISPOSITION _GCC_specific_handler (PEXCEPTION_RECORD, void *, + PCONTEXT, PDISPATCHER_CONTEXT, + _Unwind_Personality_Fn); +#endif + +#ifdef __cplusplus +} +#endif + +#ifndef HIDE_EXPORTS +#pragma GCC visibility pop +#endif + +#endif /* unwind.h */
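Of the level-1 interface above, _Unwind_Backtrace is the piece most often called directly by application code: it walks every frame on the current stack and hands each _Unwind_Context to a callback. A sketch (the frame-counting logic is illustrative; the caller must be built with unwind tables, e.g. -funwind-tables):

    #include <stdio.h>
    #include <unwind.h>

    static _Unwind_Reason_Code
    print_frame (struct _Unwind_Context *ctx, void *arg)
    {
      int *depth = arg;
      printf ("#%d pc=%p\n", (*depth)++, (void *) _Unwind_GetIP (ctx));
      return _URC_NO_REASON;  /* any other code stops the walk */
    }

    void dump_stack (void)
    {
      int depth = 0;
      _Unwind_Backtrace (print_frame, &depth);
    }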
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/varargs.h b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/varargs.h new file mode 100644 index 0000000..4b9803e --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/include/varargs.h
@@ -0,0 +1,7 @@ +#ifndef _VARARGS_H +#define _VARARGS_H + +#error "GCC no longer implements <varargs.h>." +#error "Revise your code to use <stdarg.h>." + +#endif
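The replacement the #error messages point at: with <stdarg.h>, a variadic function declares at least one named parameter and walks the rest with va_start/va_arg/va_end. A sketch of the pattern (function name illustrative):

    #include <stdarg.h>

    int sum_ints (int count, ...)
    {
      va_list ap;
      int total = 0;
      va_start (ap, count);  /* 'count' is the last named parameter */
      while (count-- > 0)
        total += va_arg (ap, int);
      va_end (ap);
      return total;
    }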
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/libgcc.a b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/libgcc.a new file mode 100644 index 0000000..a177043 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/libgcc.a Binary files differ
diff --git a/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/libgcov.a b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/libgcov.a new file mode 100644 index 0000000..211cd11 --- /dev/null +++ b/aarch64-linux-android-4.9/lib/gcc/aarch64-linux-android/4.9.x/libgcov.a Binary files differ
diff --git a/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/cc1 b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/cc1 new file mode 100755 index 0000000..3fecccb --- /dev/null +++ b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/cc1 Binary files differ
diff --git a/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/cc1plus b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/cc1plus new file mode 100755 index 0000000..3771feb --- /dev/null +++ b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/cc1plus Binary files differ
diff --git a/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/collect2 b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/collect2 new file mode 100755 index 0000000..1ea79a3 --- /dev/null +++ b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/collect2 Binary files differ
diff --git a/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/libfunction_reordering_plugin.so b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/libfunction_reordering_plugin.so new file mode 120000 index 0000000..6818e7a --- /dev/null +++ b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/libfunction_reordering_plugin.so
@@ -0,0 +1 @@ +libfunction_reordering_plugin.so.0.0.0 \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/libfunction_reordering_plugin.so.0 b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/libfunction_reordering_plugin.so.0 new file mode 120000 index 0000000..6818e7a --- /dev/null +++ b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/libfunction_reordering_plugin.so.0
@@ -0,0 +1 @@ +libfunction_reordering_plugin.so.0.0.0 \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/libfunction_reordering_plugin.so.0.0.0 b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/libfunction_reordering_plugin.so.0.0.0 new file mode 100755 index 0000000..64fb21f --- /dev/null +++ b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/libfunction_reordering_plugin.so.0.0.0 Binary files differ
diff --git a/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/liblto_plugin.so b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/liblto_plugin.so new file mode 120000 index 0000000..f25ba88 --- /dev/null +++ b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/liblto_plugin.so
@@ -0,0 +1 @@ +liblto_plugin.so.0.0.0 \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/liblto_plugin.so.0 b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/liblto_plugin.so.0 new file mode 120000 index 0000000..f25ba88 --- /dev/null +++ b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/liblto_plugin.so.0
@@ -0,0 +1 @@ +liblto_plugin.so.0.0.0 \ No newline at end of file
diff --git a/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/liblto_plugin.so.0.0.0 b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/liblto_plugin.so.0.0.0 new file mode 100755 index 0000000..94b74eb --- /dev/null +++ b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/liblto_plugin.so.0.0.0 Binary files differ
diff --git a/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/lto-wrapper b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/lto-wrapper new file mode 100755 index 0000000..a59909b --- /dev/null +++ b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/lto-wrapper Binary files differ
diff --git a/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/lto1 b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/lto1 new file mode 100755 index 0000000..44c178d --- /dev/null +++ b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/lto1 Binary files differ
diff --git a/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/plugin/gengtype b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/plugin/gengtype new file mode 100755 index 0000000..99cf1d0 --- /dev/null +++ b/aarch64-linux-android-4.9/libexec/gcc/aarch64-linux-android/4.9.x/plugin/gengtype Binary files differ
diff --git a/aarch64-linux-android-4.9/repo.prop b/aarch64-linux-android-4.9/repo.prop new file mode 100644 index 0000000..594eb9a --- /dev/null +++ b/aarch64-linux-android-4.9/repo.prop
@@ -0,0 +1,17 @@ +platform/manifest 29325e7007f581fc689a14b68b0feb04f52e3d0f +platform/ndk dcc0b23ef2681dfe06c101a271a9ccb982288638 +platform/prebuilts/gcc/darwin-x86/host/headers 4ac4f7cc41cf3c9e36fc3d6cf37fd1cfa9587a68 +platform/prebuilts/gcc/darwin-x86/host/i686-apple-darwin-4.2.1 ec5aa66aaa4964c27564d0ec84dc1f18a2d72b7e +platform/prebuilts/gcc/linux-x86/host/x86_64-linux-glibc2.11-4.8 1273431a189717842f033573eb8c777e13dd88b7 +platform/prebuilts/ndk 9cf98827acc9a95857ebdab2cd0d15e8a647509e +toolchain/binutils 066607388945f542727bd5035fb8d84bfd798034 +toolchain/build f280657461aee54b6d2807881d8a77832f4e794c +toolchain/cloog 604793eab97d360aef729f064674569ee6dbf3e1 +toolchain/expat 40172a0ae9d40a068f1e1a48ffcf6a1ccf765ed5 +toolchain/gcc 1641488ea2548cf9c5e61ef5a2f914aa496f9870 +toolchain/gmp b2acd5dbf47868ac5b5bc844e16d2cadcbd4c810 +toolchain/isl 0ccf95726af8ce58ad61ff474addbce3a31ba99c +toolchain/mpc 835d16e92eed875638a8b5d552034c3b1aae045b +toolchain/mpfr de979fc377db766591e7feaf052f0de59be46e76 +toolchain/ppl 979062d362bc5a1c00804237b408b19b4618fb24 +toolchain/sed 45df23d6dc8b51ea5cd903d023c10fd7d72415b9