/*
 * Copyright (c) 1997, 2018, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 *
 */

#ifndef SHARE_VM_ASM_ASSEMBLER_HPP
#define SHARE_VM_ASM_ASSEMBLER_HPP

#include "asm/codeBuffer.hpp"
#include "code/oopRecorder.hpp"
#include "code/relocInfo.hpp"
#include "memory/allocation.hpp"
#include "utilities/debug.hpp"
#include "utilities/growableArray.hpp"
#include "utilities/top.hpp"

#ifdef TARGET_ARCH_x86
# include "register_x86.hpp"
# include "vm_version_x86.hpp"
#endif
#ifdef TARGET_ARCH_sparc
# include "register_sparc.hpp"
# include "vm_version_sparc.hpp"
#endif
#ifdef TARGET_ARCH_zero
# include "register_zero.hpp"
# include "vm_version_zero.hpp"
#endif
#ifdef TARGET_ARCH_arm
# include "register_arm.hpp"
# include "vm_version_arm.hpp"
#endif
#ifdef TARGET_ARCH_ppc
# include "register_ppc.hpp"
# include "vm_version_ppc.hpp"
#endif
#ifdef TARGET_ARCH_aarch64
# include "register_aarch64.hpp"
# include "vm_version_aarch64.hpp"
#endif

// This file contains platform-independent assembler declarations.

class MacroAssembler;
class AbstractAssembler;
class Label;

/**
 * Labels represent destinations for control transfer instructions.  Such
 * instructions can accept a Label as their target argument.  A Label is
 * bound to the current location in the code stream by calling the
 * MacroAssembler's 'bind' method, which in turn calls the Label's 'bind'
 * method.  A Label may be referenced by an instruction before it's bound
 * (i.e., 'forward referenced').  'bind' stores the current code offset
 * in the Label object.
 *
 * If an instruction references a bound Label, the offset field(s) within
 * the instruction are immediately filled in based on the Label's code
 * offset.  If an instruction references an unbound label, that
 * instruction is put on a list of instructions that must be patched
 * (i.e., 'resolved') when the Label is bound.
 *
 * 'bind' will call the platform-specific 'patch_instruction' method to
 * fill in the offset field(s) for each unresolved instruction (if there
 * are any).  'patch_instruction' lives in one of the
 * cpu/<arch>/vm/assembler_<arch>* files.
 *
 * Instead of using a linked list of unresolved instructions, a Label has
 * an array of unresolved instruction code offsets.  _patch_index
 * contains the total number of forward references.  If the Label's array
 * overflows (i.e., _patch_index grows larger than the array size), a
 * GrowableArray is allocated to hold the remaining offsets.  (The cache
 * size is 4 for now, which handles over 99.5% of the cases.)
 *
 * Labels may only be used within a single CodeSection.  If you need
 * to create references between code sections, use explicit relocations.
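 *
 * A minimal usage sketch (the 'br_if_zero' mnemonic is illustrative;
 * real branch instructions live in the platform Assembler subclasses):
 *
 *   Label done;
 *   __ br_if_zero(r0, done);  // forward reference, recorded by add_patch_at()
 *   ...                       // code skipped when the branch is taken
 *   __ bind(done);            // binds the label; patch_instructions()
 *                             // then resolves every recorded branch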
 */
class Label VALUE_OBJ_CLASS_SPEC {
 private:
  enum { PatchCacheSize = 4 };

  // _loc encodes both the binding state (via its sign)
  // and the binding locator (via its value) of a label.
  //
  // _loc >= 0   bound label, loc() encodes the target (jump) position
  // _loc == -1  unbound label
  int _loc;

  // References to instructions that jump to this unresolved label.
  // These instructions need to be patched when the label is bound
  // using the platform-specific pd_patch_instruction() method.
  //
  // To avoid having to allocate from the C-heap each time, we provide
  // a local cache and use the overflow only if we exceed the local cache
  int _patches[PatchCacheSize];
  int _patch_index;
  GrowableArray<int>* _patch_overflow;

  Label(const Label&) { ShouldNotReachHere(); }

 public:

  /**
   * After binding, be sure 'patch_instructions' is called later to resolve any forward references
   */
  void bind_loc(int loc) {
    assert(loc >= 0, "illegal locator");
    assert(_loc == -1, "already bound");
    _loc = loc;
  }
  void bind_loc(int pos, int sect) { bind_loc(CodeBuffer::locator(pos, sect)); }

#ifndef PRODUCT
  // Iterates over all unresolved instructions for printing
  void print_instructions(MacroAssembler* masm) const;
#endif // PRODUCT

  /**
   * Returns the position of the Label in the code buffer.
   * The position is a 'locator', which encodes both offset and section.
   */
  int loc() const {
    assert(_loc >= 0, "unbound label");
    return _loc;
  }
  int loc_pos()  const { return CodeBuffer::locator_pos(loc()); }
  int loc_sect() const { return CodeBuffer::locator_sect(loc()); }

  bool is_bound() const    { return _loc >=  0; }
  bool is_unbound() const  { return _loc == -1 && _patch_index > 0; }
  bool is_unused() const   { return _loc == -1 && _patch_index == 0; }

  /**
   * Adds a reference to an unresolved displacement instruction to
   * this unbound label
   *
   * @param cb         the code buffer being patched
   * @param branch_loc the locator of the branch instruction in the code buffer
   */
  void add_patch_at(CodeBuffer* cb, int branch_loc);

  /**
   * Iterate over the list of patches, resolving the instructions
   * Call patch_instruction on each 'branch_loc' value
   */
  void patch_instructions(MacroAssembler* masm);

  void init() {
    _loc = -1;
    _patch_index = 0;
    _patch_overflow = NULL;
  }

  Label() {
    init();
  }

  ~Label() {
    assert(is_bound() || is_unused(), "Label was never bound to a location, but it was used as a jmp target");
  }

  void reset() {
    init(); // leave _patch_overflow because it points to the CodeBuffer.
  }
};

// A union type for code which has to assemble both constant and
// non-constant operands, when the distinction cannot be made
// statically.
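//
// Usage sketch ('tmp' is a hypothetical Register; the immediate is
// arbitrary): callers test is_register()/is_constant() to choose
// between the register and immediate encodings of an instruction:
//
//   RegisterOrConstant idx = have_reg ? RegisterOrConstant(tmp)
//                                     : RegisterOrConstant((intptr_t) 8);
//   if (idx.is_constant()) { ... emit immediate-operand encoding ... }
//   else                   { ... emit register-operand encoding  ... }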
class RegisterOrConstant VALUE_OBJ_CLASS_SPEC {
 private:
  Register _r;
  intptr_t _c;

 public:
  RegisterOrConstant(): _r(noreg), _c(0) {}
  RegisterOrConstant(Register r): _r(r), _c(0) {}
  RegisterOrConstant(intptr_t c): _r(noreg), _c(c) {}

  Register as_register() const { assert(is_register(),""); return _r; }
  intptr_t as_constant() const { assert(is_constant(),""); return _c; }

  Register register_or_noreg() const { return _r; }
  intptr_t constant_or_zero() const  { return _c; }

  bool is_register() const { return _r != noreg; }
  bool is_constant() const { return _r == noreg; }
};

// The Abstract Assembler: Pure assembler doing NO optimizations on the
// instruction level; i.e., what you write is what you get.
// The Assembler is generating code into a CodeBuffer.
class AbstractAssembler : public ResourceObj  {
  friend class Label;

 protected:
  CodeSection* _code_section;          // section within the code buffer
  OopRecorder* _oop_recorder;          // support for relocInfo::oop_type

 public:
  // Code emission & accessing
  address addr_at(int pos) const { return code_section()->start() + pos; }

 protected:
  // This routine is called when a label is used as an address.
  // Labels and displacements truck in offsets, but target must return a PC.
  address target(Label& L)             { return code_section()->target(L, pc()); }

  bool is8bit(int x) const             { return -0x80 <= x && x < 0x80; }
  bool isByte(int x) const             { return 0 <= x && x < 0x100; }
  bool isShiftCount(int x) const       { return 0 <= x && x < 32; }

  // Instruction boundaries (required when emitting relocatable values).
  class InstructionMark: public StackObj {
   private:
    AbstractAssembler* _assm;

   public:
    InstructionMark(AbstractAssembler* assm) : _assm(assm) {
      assert(assm->inst_mark() == NULL, "overlapping instructions");
      _assm->set_inst_mark();
    }
    ~InstructionMark() {
      _assm->clear_inst_mark();
    }
  };
  friend class InstructionMark;
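
  // Usage sketch: bracket the emission of an instruction that carries a
  // relocation, so the relocation is tied to the instruction's first
  // byte (the emission step stands in for platform emit calls):
  //
  //   { InstructionMark im(this);  // records the current pc as the mark
  //     relocate(rspec);           // relocation bound at the mark
  //     ...                        // emit opcode and operand bytes
  //   }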
#ifdef ASSERT
  // Make it return true on platforms which need to verify
  // instruction boundaries for some operations.
  static bool pd_check_instruction_mark();

  // Add delta to short branch distance to verify that it still fits into imm8.
  int _short_branch_delta;

  int  short_branch_delta() const { return _short_branch_delta; }
  void set_short_branch_delta()   { _short_branch_delta = 32; }
  void clear_short_branch_delta() { _short_branch_delta = 0; }

  class ShortBranchVerifier: public StackObj {
   private:
    AbstractAssembler* _assm;

   public:
    ShortBranchVerifier(AbstractAssembler* assm) : _assm(assm) {
      assert(assm->short_branch_delta() == 0, "overlapping instructions");
      _assm->set_short_branch_delta();
    }
    ~ShortBranchVerifier() {
      _assm->clear_short_branch_delta();
    }
  };
#else
  // Dummy in product.
  class ShortBranchVerifier: public StackObj {
   public:
    ShortBranchVerifier(AbstractAssembler* assm) {}
  };
#endif

 public:

  // Creation
  AbstractAssembler(CodeBuffer* code);

  // ensure buf contains all code (call this before using/copying the code)
  void flush();

  void emit_int8(   int8_t  x) { code_section()->emit_int8(   x); }
  void emit_int16(  int16_t x) { code_section()->emit_int16(  x); }
  void emit_int32(  int32_t x) { code_section()->emit_int32(  x); }
  void emit_int64(  int64_t x) { code_section()->emit_int64(  x); }

  void emit_float(  jfloat  x) { code_section()->emit_float(  x); }
  void emit_double( jdouble x) { code_section()->emit_double( x); }
  void emit_address(address x) { code_section()->emit_address(x); }

  // min and max values for signed immediate ranges
  static int min_simm(int nbits) { return -(intptr_t(1) << (nbits - 1))    ; }
  static int max_simm(int nbits) { return  (intptr_t(1) << (nbits - 1)) - 1; }

  // Define some:
  static int min_simm10() { return min_simm(10); }
  static int min_simm13() { return min_simm(13); }
  static int min_simm16() { return min_simm(16); }

  // Test if x is within signed immediate range for nbits
  static bool is_simm(intptr_t x, int nbits) { return min_simm(nbits) <= x && x <= max_simm(nbits); }
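  // For example, with nbits == 13 (SPARC's simm13 range):
  //   min_simm(13) == -4096 and max_simm(13) == 4095, so
  //   is_simm(4095, 13) is true while is_simm(4096, 13) is false.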

  // Define some:
  static bool is_simm5( intptr_t x) { return is_simm(x, 5 ); }
  static bool is_simm8( intptr_t x) { return is_simm(x, 8 ); }
  static bool is_simm10(intptr_t x) { return is_simm(x, 10); }
  static bool is_simm11(intptr_t x) { return is_simm(x, 11); }
  static bool is_simm12(intptr_t x) { return is_simm(x, 12); }
  static bool is_simm13(intptr_t x) { return is_simm(x, 13); }
  static bool is_simm16(intptr_t x) { return is_simm(x, 16); }
  static bool is_simm26(intptr_t x) { return is_simm(x, 26); }
  static bool is_simm32(intptr_t x) { return is_simm(x, 32); }

  // Accessors
  CodeSection*  code_section() const   { return _code_section; }
  CodeBuffer*   code()         const   { return code_section()->outer(); }
  int           sect()         const   { return code_section()->index(); }
  address       pc()           const   { return code_section()->end();   }
  int           offset()       const   { return code_section()->size();  }
  int           locator()      const   { return CodeBuffer::locator(offset(), sect()); }

  OopRecorder*  oop_recorder() const   { return _oop_recorder; }
  void      set_oop_recorder(OopRecorder* r) { _oop_recorder = r; }

  address       inst_mark() const { return code_section()->mark();       }
  void      set_inst_mark()       {        code_section()->set_mark();   }
  void    clear_inst_mark()       {        code_section()->clear_mark(); }

  // Constants in code
  void relocate(RelocationHolder const& rspec, int format = 0) {
    assert(!pd_check_instruction_mark()
        || inst_mark() == NULL || inst_mark() == code_section()->end(),
        "call relocate() between instructions");
    code_section()->relocate(code_section()->end(), rspec, format);
  }
  void relocate(   relocInfo::relocType rtype, int format = 0) {
    code_section()->relocate(code_section()->end(), rtype, format);
  }
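
  // Usage sketch: relocation info is recorded at the current end of the
  // section, immediately before emitting the instruction it describes:
  //
  //   relocate(relocInfo::runtime_call_type);
  //   ...  // emit the call instruction the relocation describes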

  static int code_fill_byte();         // used to pad out odd-sized code buffers

  // Associate a comment with the current offset.  It will be printed
  // along with the disassembly when printing nmethods.  Currently
  // only supported in the instruction section of the code buffer.
  void block_comment(const char* comment);
  // Copy str to a buffer that has the same lifetime as the CodeBuffer
  const char* code_string(const char* str);

  // Label functions
  void bind(Label& L); // binds an unbound label L to the current code position

  // Move to a different section in the same code buffer.
  void set_code_section(CodeSection* cs);

  // Inform assembler when generating stub code and relocation info
  address    start_a_stub(int required_space);
  void       end_a_stub();
  // Ditto for constants.
  address    start_a_const(int required_space, int required_align = sizeof(double));
  void       end_a_const(CodeSection* cs);  // Pass the codesection to continue in (insts or stubs?).

  // constants support
  //
  // We must remember the current code section (insts or stubs) in the
  // local variable 'c1' so end_a_const() can reset to the proper section.
  address long_constant(jlong c) {
    CodeSection* c1 = _code_section;
    address ptr = start_a_const(sizeof(c), sizeof(c));
    if (ptr != NULL) {
      emit_int64(c);
      end_a_const(c1);
    }
    return ptr;
  }
  address double_constant(jdouble c) {
    CodeSection* c1 = _code_section;
    address ptr = start_a_const(sizeof(c), sizeof(c));
    if (ptr != NULL) {
      emit_double(c);
      end_a_const(c1);
    }
    return ptr;
  }
  address float_constant(jfloat c) {
    CodeSection* c1 = _code_section;
    address ptr = start_a_const(sizeof(c), sizeof(c));
    if (ptr != NULL) {
      emit_float(c);
      end_a_const(c1);
    }
    return ptr;
  }
  address address_constant(address c) {
    CodeSection* c1 = _code_section;
    address ptr = start_a_const(sizeof(c), sizeof(c));
    if (ptr != NULL) {
      emit_address(c);
      end_a_const(c1);
    }
    return ptr;
  }
  address address_constant(address c, RelocationHolder const& rspec) {
    CodeSection* c1 = _code_section;
    address ptr = start_a_const(sizeof(c), sizeof(c));
    if (ptr != NULL) {
      relocate(rspec);
      emit_address(c);
      end_a_const(c1);
    }
    return ptr;
  }
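
  // Usage sketch: emit a constant into the constant section and reference
  // the returned address from generated code. A NULL result means the
  // constant section could not be grown (e.g., the code cache is full),
  // and the caller must bail out rather than emit the reference:
  //
  //   address a = double_constant(2.0);
  //   if (a == NULL) { ... bail out: no space in code cache ... }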

  // Bootstrapping aid to cope with delayed determination of constants.
  // Returns a static address which will eventually contain the constant.
  // The value zero (NULL) stands in for a constant which is still uncomputed.
  // Thus, the eventual value of the constant must not be zero.
  // This is fine, since this is designed for embedding object field
  // offsets in code which must be generated before the object class is loaded.
  // Field offsets are never zero, since an object's header (mark word)
  // is located at offset zero.
  RegisterOrConstant delayed_value(int(*value_fn)(), Register tmp, int offset = 0);
  RegisterOrConstant delayed_value(address(*value_fn)(), Register tmp, int offset = 0);
  virtual RegisterOrConstant delayed_value_impl(intptr_t* delayed_value_addr, Register tmp, int offset) = 0;
  // Last overloading is platform-dependent; look in assembler_<arch>.cpp.
  static intptr_t* delayed_value_addr(int(*constant_fn)());
  static intptr_t* delayed_value_addr(address(*constant_fn)());
  static void update_delayed_values();
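
  // Usage sketch ('my_offset_fn' is a hypothetical int() function whose
  // result is unknown while assembling but fixed before the code runs):
  //
  //   RegisterOrConstant off = delayed_value(my_offset_fn, tmp);
  //   // 'off' is either the constant itself (if already computable) or
  //   // the register 'tmp', loaded from the static cell that will
  //   // eventually hold the value.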

  // Bang stack to trigger StackOverflowError at a safe location
  // implementation delegates to machine-specific bang_stack_with_offset
  void generate_stack_overflow_check( int frame_size_in_bytes );
  virtual void bang_stack_with_offset(int offset) = 0;
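
  // Sketch of the intended effect (the shared definition lives in
  // assembler.cpp): one word per stack page covering the new frame is
  // written via bang_stack_with_offset(), so crossing into the guard
  // zone faults at a well-defined point instead of somewhere inside
  // the callee.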


  /**
   * A platform-dependent method to patch a jump instruction that refers
   * to this label.
   *
   * @param branch the location of the instruction to patch
   * @param target the address the patched instruction should jump to
   */
  void pd_patch_instruction(address branch, address target);

};

#ifdef TARGET_ARCH_x86
# include "assembler_x86.hpp"
#endif
#ifdef TARGET_ARCH_aarch64
# include "assembler_aarch64.hpp"
#endif
#ifdef TARGET_ARCH_sparc
# include "assembler_sparc.hpp"
#endif
#ifdef TARGET_ARCH_zero
# include "assembler_zero.hpp"
#endif
#ifdef TARGET_ARCH_arm
# include "assembler_arm.hpp"
#endif
#ifdef TARGET_ARCH_ppc
# include "assembler_ppc.hpp"
#endif


#endif // SHARE_VM_ASM_ASSEMBLER_HPP