src/cpu/aarch64/vm/c1_MacroAssembler_aarch64.cpp @ 10905:f57189b7648d
8257192: Integrate AArch64 JIT port into 8u
7009641: Don't fail VM when CodeCache is full
8073108: [AArch64] Use x86 and SPARC CPU instructions for GHASH acceleration
8130309: Need to bailout cleanly if creation of stubs fails when codecache is out of space (AArch64 changes)
8131779: AARCH64: add Montgomery multiply intrinsic
8132875: AArch64: Fix error introduced into AArch64 CodeCache by commit for 8130309
8135018: AARCH64: Missing memory barriers for CMS collector
8145320: Create unsafe_arraycopy and generic_arraycopy for AArch64
8148328: aarch64: redundant lsr instructions in stub code.
8148783: aarch64: SEGV running SpecJBB2013
8148948: aarch64: generate_copy_longs calls align() incorrectly
8149080: AArch64: Recognise disjoint array copy in stub code
8149365: aarch64: memory copy does not prefetch on backwards copy
8149907: aarch64: use load/store pair instructions in call_stub
8150038: aarch64: make use of CBZ and CBNZ when comparing narrow pointer with zero
8150045: arraycopy causes segfaults in SATB during garbage collection
8150082: aarch64: optimise small array copy
8150229: aarch64: pipeline class for several instructions is not set correctly
8150313: aarch64: optimise array copy using SIMD instructions
8150394: aarch64: add support for 8.1 LSE CAS instructions
8150652: Remove unused code in AArch64 back end
8151340: aarch64: prefetch the destination word for write prior to ldxr/stxr loops.
8151502: optimize pd_disjoint_words and pd_conjoint_words
8151775: aarch64: add support for 8.1 LSE atomic operations
8152537: aarch64: Make use of CBZ and CBNZ when comparing unsigned values with zero.
8152840: aarch64: improve _unsafe_arraycopy stub routine
8153172: aarch64: hotspot crashes after the 8.1 LSE patch is merged
8153713: aarch64: improve short array clearing using store pair
8153797: aarch64: Add Arrays.fill stub code
8154413: AArch64: Better byte behaviour
8154537: AArch64: some integer rotate instructions are never emitted
8154739: AArch64: TemplateTable::fast_xaccess loads in wrong mode
8155015: Aarch64: bad assert in spill generation code
8155100: AArch64: Relax alignment requirement for byte_map_base
8155612: Aarch64: vector nodes need to support misaligned offset
8155617: aarch64: ClearArray does not use DC ZVA
8155627: Enable SA on AArch64
8155653: TestVectorUnalignedOffset.java not pushed with 8155612
8156731: aarch64: java/util/Arrays/Correct.java fails due to _generic_arraycopy stub routine
8157841: aarch64: prefetch ignores cache line size
8157906: aarch64: some more integer rotate instructions are never emitted
8158913: aarch64: SEGV running Spark terasort
8159052: aarch64: optimise unaligned copies in pd_disjoint_words and pd_conjoint_words
8159063: aarch64: optimise unaligned array copy long
8160748: [AArch64] Inconsistent types for ideal_reg
8161072: AArch64: jtreg compiler/uncommontrap/TestDeoptOOM failure
8161190: AArch64: Fix overflow in immediate cmp instruction
8164113: AArch64: follow-up the fix for 8161598
8165673: AArch64: Fix JNI floating point argument handling
8167200: AArch64: Broken stack pointer adjustment in interpreter
8167421: AArch64: in one core system, fatal error: Illegal threadstate encountered
8167595: AArch64: SEGV in stub code cipherBlockChaining_decryptAESCrypt
8168699: Validate special case invocations [AArch64 support]
8168888: Port 8160591: Improve internal array handling to AArch64.
8170100: AArch64: Crash in C1-compiled code accessing References
8170188: jtreg test compiler/types/TestMeetIncompatibleInterfaceArrays.java causes JVM crash
8170873: PPC64/aarch64: Poor StrictMath performance due to non-optimized compilation
8171537: aarch64: compiler/c1/Test6849574.java generates guarantee failure in C1
8172881: AArch64: assertion failure: the int pressure is incorrect
8173472: AArch64: C1 comparisons with null only use 32-bit instructions
8176100: [AArch64] [REDO][REDO] G1 Needs pre barrier on dereference of weak JNI handles
8177661: Correct ad rule output register types from iRegX to iRegXNoSp
8179954: AArch64: C1 and C2 volatile accesses are not sequentially consistent
8182581: aarch64: fix for crash caused by earlyret of compiled method
8183925: [AArch64] Decouple crash protection from watcher thread
8186325: AArch64: jtreg test hotspot/test/gc/g1/TestJNIWeakG1/TestJNIWeakG1.java SEGV
8187224: aarch64: some inconsistency between aarch64_ad.m4 and aarch64.ad
8189170: [AArch64] Add option to disable stack overflow checking in primordial thread for use with JNI_CreateJavaJVM
8193133: Assertion failure because 0xDEADDEAD can be in-heap
8195685: AArch64 port of 8174962: Better interface invocations
8195859: AArch64: vtableStubs gtest fails after 8174962
8196136: AArch64: Correct register use in patch for JDK-8194686
8196221: AArch64: Mistake in committed patch for JDK-8195859
8199712: [AArch64] Flight Recorder
8203481: Incorrect constraint for unextended_sp in frame:safe_for_sender
8203699: java/lang/invoke/SpecialInterfaceCall fails with SIGILL on aarch64
8205421: AARCH64: StubCodeMark should be placed after alignment
8206163: AArch64: incorrect code generation for StoreCM
8207345: Trampoline generation code reads from uninitialized memory
8207838: AArch64: Float registers incorrectly restored in JNI call
8209413: AArch64: NPE in clhsdb jstack command
8209414: [AArch64] method handle invocation does not respect JVMTI interp_only mode
8209415: Fix JVMTI test failure HS202
8209420: Track membars for volatile accesses so they can be properly optimized
8209835: Aarch64: elide barriers on all volatile operations
8210425: [AArch64] sharedRuntimeTrig/sharedRuntimeTrans compiled without optimization
8211064: [AArch64] Interpreter and c1 don't correctly handle jboolean results in native calls
8211233: MemBarNode::trailing_membar() and MemBarNode::leading_membar() need to handle dying subgraphs better
8213134: AArch64: vector shift failed with MaxVectorSize=8
8213419: [AArch64] C2 may hang in MulLNode::Ideal()/MulINode::Ideal() with gcc 8.2.1
8214857: "bad trailing membar" assert failure at memnode.cpp:3220
8215951: AArch64: jtreg test vmTestbase/nsk/jvmti/PopFrame/popframe005 segfaults
8215961: jdk/jfr/event/os/TestCPUInformation.java fails on AArch64
8216350: AArch64: monitor unlock fast path not called
8216989: CardTableBarrierSetAssembler::gen_write_ref_array_post_barrier() does not check for zero length on AARCH64
8217368: AArch64: C2 recursive stack locking optimisation not triggered
8218185: aarch64: missing LoadStore barrier in TemplateTable::putfield_or_static
8219011: Implement MacroAssembler::warn method on AArch64
8219635: aarch64: missing LoadStore barrier in TemplateTable::fast_storefield
8221220: AArch64: Add StoreStore membar explicitly for Volatile Writes in TemplateTable
8221658: aarch64: add necessary predicate for ubfx patterns
8224671: AArch64: mauve System.arraycopy test failure
8224828: aarch64: rflags is not correct after safepoint poll
8224851: AArch64: fix warnings and errors with Clang and GCC 8.3
8224880: AArch64: java/javac error with AllocatePrefetchDistance
8228400: Remove built-in AArch64 simulator
8228406: Superfluous change in chaitin.hpp
8228593: Revert explicit JDK 7 support additions
8228716: Revert InstanceKlass::print_on debug additions
8228718: Revert incorrect backport of JDK-8129757 to 8-aarch64
8228725: AArch64: Purge method call format support
8228747: Revert "unused" attribute from test_arraycopy_func
8228767: Revert ResourceMark additions
8228770: Revert development hsdis changes
8229123: Revert build fixes for aarch64/zero
8229124: Revert disassembler.cpp changes
8229145: Revert TemplateTable::bytecode() visibility change
8233839: aarch64: missing memory barrier in NewObjectArrayStub and NewTypeArrayStub
8237512: AArch64: aarch64TestHook leaks a BufferBlob
8246482: Build failures with +JFR -PCH
8247979: aarch64: missing side effect of killing flags for clearArray_reg_reg
8248219: aarch64: missing memory barrier in fast_storefield and fast_accessfield
Reviewed-by: shade, aph
author   | andrew
date     | Mon, 01 Feb 2021 03:48:36 +0000
parents  |
children | f79e943d15a7
/*
 * Copyright (c) 2013, Red Hat Inc.
 * Copyright (c) 1999, 2011, Oracle and/or its affiliates.
 * All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 *
 */

#include "precompiled.hpp"
#include "c1/c1_MacroAssembler.hpp"
#include "c1/c1_Runtime1.hpp"
#include "classfile/systemDictionary.hpp"
#include "gc_interface/collectedHeap.hpp"
#include "interpreter/interpreter.hpp"
#include "oops/arrayOop.hpp"
#include "oops/markOop.hpp"
#include "runtime/basicLock.hpp"
#include "runtime/biasedLocking.hpp"
#include "runtime/os.hpp"
#include "runtime/stubRoutines.hpp"

void C1_MacroAssembler::float_cmp(bool is_float, int unordered_result,
                                  FloatRegister f0, FloatRegister f1,
                                  Register result)
{
  Label done;
  if (is_float) {
    fcmps(f0, f1);
  } else {
    fcmpd(f0, f1);
  }
  if (unordered_result < 0) {
    // we want -1 for unordered or less than, 0 for equal and 1 for
    // greater than.
    cset(result, NE);          // Not equal or unordered
    cneg(result, result, LT);  // Less than or unordered
  } else {
    // we want -1 for less than, 0 for equal and 1 for unordered or
    // greater than.
    cset(result, NE);          // Not equal or unordered
    cneg(result, result, LO);  // Less than
  }
}

int C1_MacroAssembler::lock_object(Register hdr, Register obj, Register disp_hdr, Register scratch, Label& slow_case) {
  const int aligned_mask = BytesPerWord -1;
  const int hdr_offset = oopDesc::mark_offset_in_bytes();
  assert(hdr != obj && hdr != disp_hdr && obj != disp_hdr, "registers must be different");
  Label done, fail;
  int null_check_offset = -1;

  verify_oop(obj);

  // save object being locked into the BasicObjectLock
  str(obj, Address(disp_hdr, BasicObjectLock::obj_offset_in_bytes()));

  if (UseBiasedLocking) {
    assert(scratch != noreg, "should have scratch register at this point");
    null_check_offset = biased_locking_enter(disp_hdr, obj, hdr, scratch, false, done, &slow_case);
  } else {
    null_check_offset = offset();
  }

  // Load object header
  ldr(hdr, Address(obj, hdr_offset));
  // and mark it as unlocked
  orr(hdr, hdr, markOopDesc::unlocked_value);
  // save unlocked object header into the displaced header location on the stack
  str(hdr, Address(disp_hdr, 0));
  // test if object header is still the same (i.e. unlocked), and if so, store the
  // displaced header address in the object header - if it is not the same, get the
  // object header instead
  lea(rscratch2, Address(obj, hdr_offset));
  cmpxchgptr(hdr, disp_hdr, rscratch2, rscratch1, done, /*fallthough*/NULL);
  // if the object header was the same, we're done
  // if the object header was not the same, it is now in the hdr register
  // => test if it is a stack pointer into the same stack (recursive locking), i.e.:
  //
  // 1) (hdr & aligned_mask) == 0
  // 2) sp <= hdr
  // 3) hdr <= sp + page_size
  //
  // these 3 tests can be done by evaluating the following expression:
  //
  // (hdr - sp) & (aligned_mask - page_size)
  //
  // assuming both the stack pointer and page_size have their least
  // significant 2 bits cleared and page_size is a power of 2
  mov(rscratch1, sp);
  sub(hdr, hdr, rscratch1);
  ands(hdr, hdr, aligned_mask - os::vm_page_size());
  // for recursive locking, the result is zero => save it in the displaced header
  // location (NULL in the displaced hdr location indicates recursive locking)
  str(hdr, Address(disp_hdr, 0));
  // otherwise we don't care about the result and handle locking via runtime call
  cbnz(hdr, slow_case);
  // done
  bind(done);
  if (PrintBiasedLockingStatistics) {
    lea(rscratch2, ExternalAddress((address)BiasedLocking::fast_path_entry_count_addr()));
    addmw(Address(rscratch2, 0), 1, rscratch1);
  }
  return null_check_offset;
}


void C1_MacroAssembler::unlock_object(Register hdr, Register obj, Register disp_hdr, Label& slow_case) {
  const int aligned_mask = BytesPerWord -1;
  const int hdr_offset = oopDesc::mark_offset_in_bytes();
  assert(hdr != obj && hdr != disp_hdr && obj != disp_hdr, "registers must be different");
  Label done;

  if (UseBiasedLocking) {
    // load object
    ldr(obj, Address(disp_hdr, BasicObjectLock::obj_offset_in_bytes()));
    biased_locking_exit(obj, hdr, done);
  }

  // load displaced header
  ldr(hdr, Address(disp_hdr, 0));
  // if the loaded hdr is NULL we had recursive locking
  // if we had recursive locking, we are done
  cbz(hdr, done);
  if (!UseBiasedLocking) {
    // load object
    ldr(obj, Address(disp_hdr, BasicObjectLock::obj_offset_in_bytes()));
  }
  verify_oop(obj);
  // test if object header is pointing to the displaced header, and if so, restore
  // the displaced header in the object - if the object header is not pointing to
  // the displaced header, get the object header instead
  // if the object header was not pointing to the displaced header,
  // we do unlocking via runtime call
  if (hdr_offset) {
    lea(rscratch1, Address(obj, hdr_offset));
    cmpxchgptr(disp_hdr, hdr, rscratch1, rscratch2, done, &slow_case);
  } else {
    cmpxchgptr(disp_hdr, hdr, obj, rscratch2, done, &slow_case);
  }
  // done
  bind(done);
}


// Defines obj, preserves var_size_in_bytes
void C1_MacroAssembler::try_allocate(Register obj, Register var_size_in_bytes, int con_size_in_bytes, Register t1, Register t2, Label& slow_case) {
  if (UseTLAB) {
    tlab_allocate(obj, var_size_in_bytes, con_size_in_bytes, t1, t2, slow_case);
  } else {
    eden_allocate(obj, var_size_in_bytes, con_size_in_bytes, t1, slow_case);
    incr_allocated_bytes(noreg, var_size_in_bytes, con_size_in_bytes, t1);
  }
}

void C1_MacroAssembler::initialize_header(Register obj, Register klass, Register len, Register t1, Register t2) {
  assert_different_registers(obj, klass, len);
  if (UseBiasedLocking && !len->is_valid()) {
    assert_different_registers(obj, klass, len, t1, t2);
    ldr(t1, Address(klass, Klass::prototype_header_offset()));
  } else {
    // This assumes that all prototype bits fit in an int32_t
    mov(t1, (int32_t)(intptr_t)markOopDesc::prototype());
  }
  str(t1, Address(obj, oopDesc::mark_offset_in_bytes()));

  if (UseCompressedClassPointers) { // Take care not to kill klass
    encode_klass_not_null(t1, klass);
    strw(t1, Address(obj, oopDesc::klass_offset_in_bytes()));
  } else {
    str(klass, Address(obj, oopDesc::klass_offset_in_bytes()));
  }

  if (len->is_valid()) {
    strw(len, Address(obj, arrayOopDesc::length_offset_in_bytes()));
  } else if (UseCompressedClassPointers) {
    store_klass_gap(obj, zr);
  }
}

// Zero words; len is in bytes
// Destroys all registers except addr
// len must be a nonzero multiple of wordSize
void C1_MacroAssembler::zero_memory(Register addr, Register len, Register t1) {
  assert_different_registers(addr, len, t1, rscratch1, rscratch2);

#ifdef ASSERT
  { Label L;
    tst(len, BytesPerWord - 1);
    br(Assembler::EQ, L);
    stop("len is not a multiple of BytesPerWord");
    bind(L);
  }
#endif

#ifndef PRODUCT
  block_comment("zero memory");
#endif

  Label loop;
  Label entry;

//  Algorithm:
//
//    scratch1 = cnt & 7;
//    cnt -= scratch1;
//    p += scratch1;
//    switch (scratch1) {
//      do {
//        cnt -= 8;
//          p[-8] = 0;
//        case 7:
//          p[-7] = 0;
//        case 6:
//          p[-6] = 0;
//          ...
//        case 1:
//          p[-1] = 0;
//        case 0:
//          p += 8;
//      } while (cnt);
//    }

  const int unroll = 8; // Number of str(zr) instructions we'll unroll

  lsr(len, len, LogBytesPerWord);
  andr(rscratch1, len, unroll - 1);  // tmp1 = cnt % unroll
  sub(len, len, rscratch1);          // cnt -= unroll
  // t1 always points to the end of the region we're about to zero
  add(t1, addr, rscratch1, Assembler::LSL, LogBytesPerWord);
  adr(rscratch2, entry);
  sub(rscratch2, rscratch2, rscratch1, Assembler::LSL, 2);
  br(rscratch2);
  bind(loop);
  sub(len, len, unroll);
  for (int i = -unroll; i < 0; i++)
    str(zr, Address(t1, i * wordSize));
  bind(entry);
  add(t1, t1, unroll * wordSize);
  cbnz(len, loop);
}

// preserves obj, destroys len_in_bytes
void C1_MacroAssembler::initialize_body(Register obj, Register len_in_bytes, int hdr_size_in_bytes, Register t1) {
  Label done;
  assert(obj != len_in_bytes && obj != t1 && t1 != len_in_bytes, "registers must be different");
  assert((hdr_size_in_bytes & (BytesPerWord - 1)) == 0, "header size is not a multiple of BytesPerWord");
  Register index = len_in_bytes;
  // index is positive and ptr sized
  subs(index, index, hdr_size_in_bytes);
  br(Assembler::EQ, done);
  // note: for the remaining code to work, index must be a multiple of BytesPerWord
#ifdef ASSERT
  { Label L;
    tst(index, BytesPerWord - 1);
    br(Assembler::EQ, L);
    stop("index is not a multiple of BytesPerWord");
    bind(L);
  }
#endif

  // Preserve obj
  if (hdr_size_in_bytes)
    add(obj, obj, hdr_size_in_bytes);
  zero_memory(obj, index, t1);
  if (hdr_size_in_bytes)
    sub(obj, obj, hdr_size_in_bytes);

  // done
  bind(done);
}


void C1_MacroAssembler::allocate_object(Register obj, Register t1, Register t2, int header_size, int object_size, Register klass, Label& slow_case) {
  assert_different_registers(obj, t1, t2); // XXX really?
  assert(header_size >= 0 && object_size >= header_size, "illegal sizes");

  try_allocate(obj, noreg, object_size * BytesPerWord, t1, t2, slow_case);

  initialize_object(obj, klass, noreg, object_size * HeapWordSize, t1, t2);
}

void C1_MacroAssembler::initialize_object(Register obj, Register klass, Register var_size_in_bytes, int con_size_in_bytes, Register t1, Register t2) {
  assert((con_size_in_bytes & MinObjAlignmentInBytesMask) == 0,
         "con_size_in_bytes is not multiple of alignment");
  const int hdr_size_in_bytes = instanceOopDesc::header_size() * HeapWordSize;

  initialize_header(obj, klass, noreg, t1, t2);

  // clear rest of allocated space
  const Register index = t2;
  const int threshold = 16 * BytesPerWord;    // approximate break even point for code size (see comments below)
  if (var_size_in_bytes != noreg) {
    mov(index, var_size_in_bytes);
    initialize_body(obj, index, hdr_size_in_bytes, t1);
  } else if (con_size_in_bytes <= threshold) {
    // use explicit null stores
    int i = hdr_size_in_bytes;
    if (i < con_size_in_bytes && (con_size_in_bytes % (2 * BytesPerWord))) {
      str(zr, Address(obj, i));
      i += BytesPerWord;
    }
    for (; i < con_size_in_bytes; i += 2 * BytesPerWord)
      stp(zr, zr, Address(obj, i));
  } else if (con_size_in_bytes > hdr_size_in_bytes) {
    block_comment("zero memory");
    // use loop to null out the fields
    int words = (con_size_in_bytes - hdr_size_in_bytes) / BytesPerWord;
    mov(index, words / 8);

    const int unroll = 8; // Number of str(zr) instructions we'll unroll
    int remainder = words % unroll;
    lea(rscratch1, Address(obj, hdr_size_in_bytes + remainder * BytesPerWord));

    Label entry_point, loop;
    b(entry_point);

    bind(loop);
    sub(index, index, 1);
    for (int i = -unroll; i < 0; i++) {
      if (-i == remainder)
        bind(entry_point);
      str(zr, Address(rscratch1, i * wordSize));
    }
    if (remainder == 0)
      bind(entry_point);
    add(rscratch1, rscratch1, unroll * wordSize);
    cbnz(index, loop);
  }

  membar(StoreStore);

  if (CURRENT_ENV->dtrace_alloc_probes()) {
    assert(obj == r0, "must be");
    far_call(RuntimeAddress(Runtime1::entry_for(Runtime1::dtrace_object_alloc_id)));
  }

  verify_oop(obj);
}

void C1_MacroAssembler::allocate_array(Register obj, Register len, Register t1, Register t2, int header_size, int f, Register klass, Label& slow_case) {
  assert_different_registers(obj, len, t1, t2, klass);

  // determine alignment mask
  assert(!(BytesPerWord & 1), "must be a multiple of 2 for masking code to work");

  // check for negative or excessive length
  mov(rscratch1, (int32_t)max_array_allocation_length);
  cmp(len, rscratch1);
  br(Assembler::HS, slow_case);

  const Register arr_size = t2; // okay to be the same
  // align object end
  mov(arr_size, (int32_t)header_size * BytesPerWord + MinObjAlignmentInBytesMask);
  add(arr_size, arr_size, len, ext::uxtw, f);
  andr(arr_size, arr_size, ~MinObjAlignmentInBytesMask);

  try_allocate(obj, arr_size, 0, t1, t2, slow_case);

  initialize_header(obj, klass, len, t1, t2);

  // clear rest of allocated space
  const Register len_zero = len;
  initialize_body(obj, arr_size, header_size * BytesPerWord, len_zero);

  membar(StoreStore);

  if (CURRENT_ENV->dtrace_alloc_probes()) {
    assert(obj == r0, "must be");
    far_call(RuntimeAddress(Runtime1::entry_for(Runtime1::dtrace_object_alloc_id)));
  }

  verify_oop(obj);
}


void C1_MacroAssembler::inline_cache_check(Register receiver, Register iCache) {
  verify_oop(receiver);
  // explicit NULL check not needed since load from [klass_offset] causes a trap
  // check against inline cache
  assert(!MacroAssembler::needs_explicit_null_check(oopDesc::klass_offset_in_bytes()), "must add explicit null check");

  cmp_klass(receiver, iCache, rscratch1);
}


void C1_MacroAssembler::build_frame(int framesize, int bang_size_in_bytes) {
  // If we have to make this method not-entrant we'll overwrite its
  // first instruction with a jump.  For this action to be legal we
  // must ensure that this first instruction is a B, BL, NOP, BKPT,
  // SVC, HVC, or SMC.  Make it a NOP.
  nop();
  assert(bang_size_in_bytes >= framesize, "stack bang size incorrect");
  // Make sure there is enough stack space for this method's activation.
  // Note that we do this before doing an enter().
  generate_stack_overflow_check(bang_size_in_bytes);
  MacroAssembler::build_frame(framesize + 2 * wordSize);
}

void C1_MacroAssembler::remove_frame(int framesize) {
  MacroAssembler::remove_frame(framesize + 2 * wordSize);
}


void C1_MacroAssembler::verified_entry() {
}

#ifndef PRODUCT

void C1_MacroAssembler::verify_stack_oop(int stack_offset) {
  if (!VerifyOops) return;
  verify_oop_addr(Address(sp, stack_offset), "oop");
}

void C1_MacroAssembler::verify_not_null_oop(Register r) {
  if (!VerifyOops) return;
  Label not_null;
  cbnz(r, not_null);
  stop("non-null oop required");
  bind(not_null);
  verify_oop(r);
}

void C1_MacroAssembler::invalidate_registers(bool inv_r0, bool inv_r19, bool inv_r2, bool inv_r3, bool inv_r4, bool inv_r5) {
#ifdef ASSERT
  static int nn;
  if (inv_r0) mov(r0, 0xDEAD);
  if (inv_r19) mov(r19, 0xDEAD);
  if (inv_r2) mov(r2, nn++);
  if (inv_r3) mov(r3, 0xDEAD);
  if (inv_r4) mov(r4, 0xDEAD);
  if (inv_r5) mov(r5, 0xDEAD);
#endif
}
#endif // ifndef PRODUCT
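
A note on the recursive-lock fast path in lock_object() above: the single masked subtraction can be hard to parse on first reading. The stand-alone sketch below (illustrative only; the function name and parameters are invented, not HotSpot code) spells out the same test and why one AND covers the alignment check and both stack-bounds checks, assuming sp is word-aligned and page_size is a power of two.

#include <cstdint>

// Equivalent to: (hdr & aligned_mask) == 0 && sp <= hdr && hdr - sp < page_size.
// If hdr < sp the unsigned subtraction wraps and sets the high bits; if hdr is a
// page or more above sp the bits above the page-offset field are set; if hdr is
// not word-aligned the low bits survive the mask. Any of these makes the result
// non-zero, which is what cbnz(hdr, slow_case) tests in the emitted code.
static bool looks_like_recursive_stack_lock(uintptr_t hdr, uintptr_t sp,
                                            uintptr_t page_size,     // power of two
                                            uintptr_t aligned_mask)  // BytesPerWord - 1
{
  return ((hdr - sp) & (aligned_mask - page_size)) == 0;
}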
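
Similarly, the Algorithm comment in zero_memory() describes a computed branch into the middle of an 8-way unrolled store loop, in the style of Duff's device. A compilable C++ rendering of that comment (again an illustration with invented names, taking a word count rather than the byte length the stub receives) is:

#include <cstddef>
#include <cstdint>

static void zero_words(uintptr_t* p, size_t cnt) {  // cnt = number of words
  size_t rem = cnt & 7;  // words handled on the partial first pass
  cnt -= rem;            // remaining count is now a multiple of 8
  p += rem;              // stores below use negative offsets from p
  switch (rem) {
    do {
      cnt -= 8;
      p[-8] = 0;
      case 7: p[-7] = 0;
      case 6: p[-6] = 0;
      case 5: p[-5] = 0;
      case 4: p[-4] = 0;
      case 3: p[-3] = 0;
      case 2: p[-2] = 0;
      case 1: p[-1] = 0;
      case 0: p += 8;
    } while (cnt != 0);
  }
}

The generated stub gets the same effect without a switch: adr(rscratch2, entry) takes the address of the label that follows the eight str(zr) instructions, and the sub(..., rscratch1, Assembler::LSL, 2) backs it up by one 4-byte instruction per remainder word before the indirect branch, so only the last `rem` stores of the unrolled body run on the first pass.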