Mercurial: icedtea8-forest / hotspot
view src/share/vm/runtime/stubRoutines.hpp @ 10905:f57189b7648d
8257192: Integrate AArch64 JIT port into 8u
7009641: Don't fail VM when CodeCache is full
8073108: [AArch64] Use x86 and SPARC CPU instructions for GHASH acceleration
8130309: Need to bailout cleanly if creation of stubs fails when codecache is out of space (AArch64 changes)
8131779: AARCH64: add Montgomery multiply intrinsic
8132875: AArch64: Fix error introduced into AArch64 CodeCache by commit for 8130309
8135018: AARCH64: Missing memory barriers for CMS collector
8145320: Create unsafe_arraycopy and generic_arraycopy for AArch64
8148328: aarch64: redundant lsr instructions in stub code.
8148783: aarch64: SEGV running SpecJBB2013
8148948: aarch64: generate_copy_longs calls align() incorrectly
8149080: AArch64: Recognise disjoint array copy in stub code
8149365: aarch64: memory copy does not prefetch on backwards copy
8149907: aarch64: use load/store pair instructions in call_stub
8150038: aarch64: make use of CBZ and CBNZ when comparing narrow pointer with zero
8150045: arraycopy causes segfaults in SATB during garbage collection
8150082: aarch64: optimise small array copy
8150229: aarch64: pipeline class for several instructions is not set correctly
8150313: aarch64: optimise array copy using SIMD instructions
8150394: aarch64: add support for 8.1 LSE CAS instructions
8150652: Remove unused code in AArch64 back end
8151340: aarch64: prefetch the destination word for write prior to ldxr/stxr loops.
8151502: optimize pd_disjoint_words and pd_conjoint_words
8151775: aarch64: add support for 8.1 LSE atomic operations
8152537: aarch64: Make use of CBZ and CBNZ when comparing unsigned values with zero.
8152840: aarch64: improve _unsafe_arraycopy stub routine
8153172: aarch64: hotspot crashes after the 8.1 LSE patch is merged
8153713: aarch64: improve short array clearing using store pair
8153797: aarch64: Add Arrays.fill stub code
8154413: AArch64: Better byte behaviour
8154537: AArch64: some integer rotate instructions are never emitted
8154739: AArch64: TemplateTable::fast_xaccess loads in wrong mode
8155015: Aarch64: bad assert in spill generation code
8155100: AArch64: Relax alignment requirement for byte_map_base
8155612: Aarch64: vector nodes need to support misaligned offset
8155617: aarch64: ClearArray does not use DC ZVA
8155627: Enable SA on AArch64
8155653: TestVectorUnalignedOffset.java not pushed with 8155612
8156731: aarch64: java/util/Arrays/Correct.java fails due to _generic_arraycopy stub routine
8157841: aarch64: prefetch ignores cache line size
8157906: aarch64: some more integer rotate instructions are never emitted
8158913: aarch64: SEGV running Spark terasort
8159052: aarch64: optimise unaligned copies in pd_disjoint_words and pd_conjoint_words
8159063: aarch64: optimise unaligned array copy long
8160748: [AArch64] Inconsistent types for ideal_reg
8161072: AArch64: jtreg compiler/uncommontrap/TestDeoptOOM failure
8161190: AArch64: Fix overflow in immediate cmp instruction
8164113: AArch64: follow-up the fix for 8161598
8165673: AArch64: Fix JNI floating point argument handling
8167200: AArch64: Broken stack pointer adjustment in interpreter
8167421: AArch64: in one core system, fatal error: Illegal threadstate encountered
8167595: AArch64: SEGV in stub code cipherBlockChaining_decryptAESCrypt
8168699: Validate special case invocations [AArch64 support]
8168888: Port 8160591: Improve internal array handling to AArch64.
8170100: AArch64: Crash in C1-compiled code accessing References
8170188: jtreg test compiler/types/TestMeetIncompatibleInterfaceArrays.java causes JVM crash
8170873: PPC64/aarch64: Poor StrictMath performance due to non-optimized compilation
8171537: aarch64: compiler/c1/Test6849574.java generates guarantee failure in C1
8172881: AArch64: assertion failure: the int pressure is incorrect
8173472: AArch64: C1 comparisons with null only use 32-bit instructions
8176100: [AArch64] [REDO][REDO] G1 Needs pre barrier on dereference of weak JNI handles
8177661: Correct ad rule output register types from iRegX to iRegXNoSp
8179954: AArch64: C1 and C2 volatile accesses are not sequentially consistent
8182581: aarch64: fix for crash caused by earlyret of compiled method
8183925: [AArch64] Decouple crash protection from watcher thread
8186325: AArch64: jtreg test hotspot/test/gc/g1/TestJNIWeakG1/TestJNIWeakG1.java SEGV
8187224: aarch64: some inconsistency between aarch64_ad.m4 and aarch64.ad
8189170: [AArch64] Add option to disable stack overflow checking in primordial thread for use with JNI_CreateJavaJVM
8193133: Assertion failure because 0xDEADDEAD can be in-heap
8195685: AArch64 port of 8174962: Better interface invocations
8195859: AArch64: vtableStubs gtest fails after 8174962
8196136: AArch64: Correct register use in patch for JDK-8194686
8196221: AArch64: Mistake in committed patch for JDK-8195859
8199712: [AArch64] Flight Recorder
8203481: Incorrect constraint for unextended_sp in frame:safe_for_sender
8203699: java/lang/invoke/SpecialInterfaceCall fails with SIGILL on aarch64
8205421: AARCH64: StubCodeMark should be placed after alignment
8206163: AArch64: incorrect code generation for StoreCM
8207345: Trampoline generation code reads from uninitialized memory
8207838: AArch64: Float registers incorrectly restored in JNI call
8209413: AArch64: NPE in clhsdb jstack command
8209414: [AArch64] method handle invocation does not respect JVMTI interp_only mode
8209415: Fix JVMTI test failure HS202
8209420: Track membars for volatile accesses so they can be properly optimized
8209835: Aarch64: elide barriers on all volatile operations
8210425: [AArch64] sharedRuntimeTrig/sharedRuntimeTrans compiled without optimization
8211064: [AArch64] Interpreter and c1 don't correctly handle jboolean results in native calls
8211233: MemBarNode::trailing_membar() and MemBarNode::leading_membar() need to handle dying subgraphs better
8213134: AArch64: vector shift failed with MaxVectorSize=8
8213419: [AArch64] C2 may hang in MulLNode::Ideal()/MulINode::Ideal() with gcc 8.2.1
8214857: "bad trailing membar" assert failure at memnode.cpp:3220
8215951: AArch64: jtreg test vmTestbase/nsk/jvmti/PopFrame/popframe005 segfaults
8215961: jdk/jfr/event/os/TestCPUInformation.java fails on AArch64
8216350: AArch64: monitor unlock fast path not called
8216989: CardTableBarrierSetAssembler::gen_write_ref_array_post_barrier() does not check for zero length on AARCH64
8217368: AArch64: C2 recursive stack locking optimisation not triggered
8218185: aarch64: missing LoadStore barrier in TemplateTable::putfield_or_static
8219011: Implement MacroAssembler::warn method on AArch64
8219635: aarch64: missing LoadStore barrier in TemplateTable::fast_storefield
8221220: AArch64: Add StoreStore membar explicitly for Volatile Writes in TemplateTable
8221658: aarch64: add necessary predicate for ubfx patterns
8224671: AArch64: mauve System.arraycopy test failure
8224828: aarch64: rflags is not correct after safepoint poll
8224851: AArch64: fix warnings and errors with Clang and GCC 8.3
8224880: AArch64: java/javac error with AllocatePrefetchDistance
8228400: Remove built-in AArch64 simulator
8228406: Superfluous change in chaitin.hpp
8228593: Revert explicit JDK 7 support additions
8228716: Revert InstanceKlass::print_on debug additions
8228718: Revert incorrect backport of JDK-8129757 to 8-aarch64
8228725: AArch64: Purge method call format support
8228747: Revert "unused" attribute from test_arraycopy_func
8228767: Revert ResourceMark additions
8228770: Revert development hsdis changes
8229123: Revert build fixes for aarch64/zero
8229124: Revert disassembler.cpp changes
8229145: Revert TemplateTable::bytecode() visibility change
8233839: aarch64: missing memory barrier in NewObjectArrayStub and NewTypeArrayStub
8237512: AArch64: aarch64TestHook leaks a BufferBlob
8246482: Build failures with +JFR -PCH
8247979: aarch64: missing side effect of killing flags for clearArray_reg_reg
8248219: aarch64: missing memory barrier in fast_storefield and fast_accessfield
Reviewed-by: shade, aph
author   | andrew
date     | Mon, 01 Feb 2021 03:48:36 +0000
parents  | 44ef77ad417c
children | f79e943d15a7
/*
 * Copyright (c) 1997, 2015, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 *
 */

#ifndef SHARE_VM_RUNTIME_STUBROUTINES_HPP
#define SHARE_VM_RUNTIME_STUBROUTINES_HPP

#include "code/codeBlob.hpp"
#include "memory/allocation.hpp"
#include "runtime/frame.hpp"
#include "runtime/mutexLocker.hpp"
#include "runtime/stubCodeGenerator.hpp"
#include "utilities/top.hpp"
#ifdef TARGET_ARCH_x86
# include "nativeInst_x86.hpp"
#endif
#ifdef TARGET_ARCH_aarch64
# include "nativeInst_aarch64.hpp"
#endif
#ifdef TARGET_ARCH_sparc
# include "nativeInst_sparc.hpp"
#endif
#ifdef TARGET_ARCH_zero
# include "nativeInst_zero.hpp"
#endif
#ifdef TARGET_ARCH_arm
# include "nativeInst_arm.hpp"
#endif
#ifdef TARGET_ARCH_ppc
# include "nativeInst_ppc.hpp"
#endif

// StubRoutines provides entry points to assembly routines used by
// compiled code and the run-time system. Platform-specific entry
// points are defined in the platform-specific inner class.
//
// Class scheme:
//
//    platform-independent               platform-dependent
//
//    stubRoutines.hpp  <-- included --  stubRoutines_<arch>.hpp
//           ^                                  ^
//           |                                  |
//       implements                         implements
//           |                                  |
//           |                                  |
//    stubRoutines.cpp                   stubRoutines_<arch>.cpp
//    stubRoutines_<os_family>.cpp       stubGenerator_<arch>.cpp
//    stubRoutines_<os_arch>.cpp
//
// Note 1: The important thing is a clean decoupling between stub
//         entry points (interfacing to the whole vm; i.e., 1-to-n
//         relationship) and stub generators (interfacing only to
//         the entry points implementation; i.e., 1-to-1 relationship).
//         This significantly simplifies changes in the generator
//         structure since the rest of the vm is not affected.
//
// Note 2: stubGenerator_<arch>.cpp contains a minimal portion of
//         machine-independent code; namely the generator calls of
//         the generator functions that are used platform-independently.
//         However, it comes with the advantage of having a 1-file
//         implementation of the generator. It should be fairly easy
//         to change, should it become a problem later.
//
// Scheme for adding a new entry point:
//
// 1. determine if it's a platform-dependent or independent entry point
//    a) if platform independent: make subsequent changes in the independent files
//    b) if platform dependent: make subsequent changes in the dependent files
// 2. add a private instance variable holding the entry point address
// 3. add a public accessor function to the instance variable
// 4. implement the corresponding generator function in the platform-dependent
//    stubGenerator_<arch>.cpp file and call the function in generate_all() of
//    that file


class StubRoutines: AllStatic {

 public:
  enum platform_independent_constants {
    max_size_of_parameters = 256    // max. parameter size supported by megamorphic lookups
  };

  // Dependencies
  friend class StubGenerator;
#if defined STUBROUTINES_MD_HPP
# include STUBROUTINES_MD_HPP
#elif defined TARGET_ARCH_MODEL_x86_32
# include "stubRoutines_x86_32.hpp"
#elif defined TARGET_ARCH_MODEL_x86_64
# include "stubRoutines_x86_64.hpp"
#elif defined TARGET_ARCH_MODEL_aarch64
# include "stubRoutines_aarch64.hpp"
#elif defined TARGET_ARCH_MODEL_sparc
# include "stubRoutines_sparc.hpp"
#elif defined TARGET_ARCH_MODEL_zero
# include "stubRoutines_zero.hpp"
#elif defined TARGET_ARCH_MODEL_ppc_64
# include "stubRoutines_ppc_64.hpp"
#endif

  static jint    _verify_oop_count;
  static address _verify_oop_subroutine_entry;

  static address _call_stub_return_address;   // the return PC, when returning to a call stub
  static address _call_stub_entry;
  static address _forward_exception_entry;
  static address _catch_exception_entry;
  static address _throw_AbstractMethodError_entry;
  static address _throw_IncompatibleClassChangeError_entry;
  static address _throw_NullPointerException_at_call_entry;
  static address _throw_StackOverflowError_entry;
  static address _handler_for_unsafe_access_entry;

  static address _atomic_xchg_entry;
  static address _atomic_xchg_ptr_entry;
  static address _atomic_store_entry;
  static address _atomic_store_ptr_entry;
  static address _atomic_cmpxchg_entry;
  static address _atomic_cmpxchg_ptr_entry;
  static address _atomic_cmpxchg_long_entry;
  static address _atomic_add_entry;
  static address _atomic_add_ptr_entry;
  static address _fence_entry;
  static address _d2i_wrapper;
  static address _d2l_wrapper;

  static jint    _fpu_cntrl_wrd_std;
  static jint    _fpu_cntrl_wrd_24;
  static jint    _fpu_cntrl_wrd_64;
  static jint    _fpu_cntrl_wrd_trunc;
  static jint    _mxcsr_std;
  static jint    _fpu_subnormal_bias1[3];
  static jint    _fpu_subnormal_bias2[3];

  static BufferBlob* _code1;    // code buffer for initial routines
  static BufferBlob* _code2;    // code buffer for all other routines

  // Leaf routines which implement arraycopy and their addresses
  // arraycopy operands aligned on element type boundary
  static address _jbyte_arraycopy;
  static address _jshort_arraycopy;
  static address _jint_arraycopy;
  static address _jlong_arraycopy;
  static address _oop_arraycopy, _oop_arraycopy_uninit;
  static address _jbyte_disjoint_arraycopy;
  static address _jshort_disjoint_arraycopy;
  static address _jint_disjoint_arraycopy;
  static address _jlong_disjoint_arraycopy;
  static address _oop_disjoint_arraycopy, _oop_disjoint_arraycopy_uninit;

  // arraycopy operands aligned on zero'th element boundary
  // These are identical to the ones aligned on an element type
  // boundary, except that they assume that both source and
  // destination are HeapWord aligned.
  static address _arrayof_jbyte_arraycopy;
  static address _arrayof_jshort_arraycopy;
  static address _arrayof_jint_arraycopy;
  static address _arrayof_jlong_arraycopy;
  static address _arrayof_oop_arraycopy, _arrayof_oop_arraycopy_uninit;
  static address _arrayof_jbyte_disjoint_arraycopy;
  static address _arrayof_jshort_disjoint_arraycopy;
  static address _arrayof_jint_disjoint_arraycopy;
  static address _arrayof_jlong_disjoint_arraycopy;
  static address _arrayof_oop_disjoint_arraycopy, _arrayof_oop_disjoint_arraycopy_uninit;

  // these are recommended but optional:
  static address _checkcast_arraycopy, _checkcast_arraycopy_uninit;
  static address _unsafe_arraycopy;
  static address _generic_arraycopy;

  static address _jbyte_fill;
  static address _jshort_fill;
  static address _jint_fill;
  static address _arrayof_jbyte_fill;
  static address _arrayof_jshort_fill;
  static address _arrayof_jint_fill;

  // zero heap space aligned to jlong (8 bytes)
  static address _zero_aligned_words;

  static address _aescrypt_encryptBlock;
  static address _aescrypt_decryptBlock;
  static address _cipherBlockChaining_encryptAESCrypt;
  static address _cipherBlockChaining_decryptAESCrypt;
  static address _ghash_processBlocks;

  static address _sha1_implCompress;
  static address _sha1_implCompressMB;
  static address _sha256_implCompress;
  static address _sha256_implCompressMB;
  static address _sha512_implCompress;
  static address _sha512_implCompressMB;

  static address _updateBytesCRC32;
  static address _crc_table_adr;

  static address _multiplyToLen;
  static address _squareToLen;
  static address _mulAdd;
  static address _montgomeryMultiply;
  static address _montgomerySquare;

  // These are versions of the java.lang.Math methods which perform
  // the same operations as the intrinsic version.  They are used for
  // constant folding in the compiler to ensure equivalence.  If the
  // intrinsic version returns the same result as the strict version
  // then they can be set to the appropriate function from
  // SharedRuntime.
  static double (*_intrinsic_log)(double);
  static double (*_intrinsic_log10)(double);
  static double (*_intrinsic_exp)(double);
  static double (*_intrinsic_pow)(double, double);
  static double (*_intrinsic_sin)(double);
  static double (*_intrinsic_cos)(double);
  static double (*_intrinsic_tan)(double);

  // Safefetch stubs.
  static address _safefetch32_entry;
  static address _safefetch32_fault_pc;
  static address _safefetch32_continuation_pc;
  static address _safefetchN_entry;
  static address _safefetchN_fault_pc;
  static address _safefetchN_continuation_pc;

 public:
  // Initialization/Testing
  static void    initialize1();    // must happen before universe::genesis
  static void    initialize2();    // must happen after  universe::genesis

  static bool is_stub_code(address addr)    { return contains(addr); }

  static bool contains(address addr) {
    return
      (_code1 != NULL && _code1->blob_contains(addr)) ||
      (_code2 != NULL && _code2->blob_contains(addr)) ;
  }

  static CodeBlob* code1() { return _code1; }
  static CodeBlob* code2() { return _code2; }

  // Debugging
  static jint    verify_oop_count()         { return _verify_oop_count; }
  static jint*   verify_oop_count_addr()    { return &_verify_oop_count; }
  // a subroutine for debugging the GC
  static address verify_oop_subroutine_entry_address() { return (address)&_verify_oop_subroutine_entry; }

  static address catch_exception_entry()    { return _catch_exception_entry; }

  // Calls to Java
  typedef void (*CallStub)(
    address   link,
    intptr_t* result,
    BasicType result_type,
    Method*   method,
    address   entry_point,
    intptr_t* parameters,
    int       size_of_parameters,
    TRAPS
  );

  static CallStub call_stub()               { return CAST_TO_FN_PTR(CallStub, _call_stub_entry); }

  // Exceptions
  static address forward_exception_entry()  { return _forward_exception_entry; }
  // Implicit exceptions
  static address throw_AbstractMethodError_entry()         { return _throw_AbstractMethodError_entry; }
  static address throw_IncompatibleClassChangeError_entry(){ return _throw_IncompatibleClassChangeError_entry; }
  static address throw_NullPointerException_at_call_entry(){ return _throw_NullPointerException_at_call_entry; }
  static address throw_StackOverflowError_entry()          { return _throw_StackOverflowError_entry; }

  // Exceptions during unsafe access - should throw Java exception rather
  // than crash.
  static address handler_for_unsafe_access() { return _handler_for_unsafe_access_entry; }

  static address atomic_xchg_entry()         { return _atomic_xchg_entry; }
  static address atomic_xchg_ptr_entry()     { return _atomic_xchg_ptr_entry; }
  static address atomic_store_entry()        { return _atomic_store_entry; }
  static address atomic_store_ptr_entry()    { return _atomic_store_ptr_entry; }
  static address atomic_cmpxchg_entry()      { return _atomic_cmpxchg_entry; }
  static address atomic_cmpxchg_ptr_entry()  { return _atomic_cmpxchg_ptr_entry; }
  static address atomic_cmpxchg_long_entry() { return _atomic_cmpxchg_long_entry; }
  static address atomic_add_entry()          { return _atomic_add_entry; }
  static address atomic_add_ptr_entry()      { return _atomic_add_ptr_entry; }
  static address fence_entry()               { return _fence_entry; }

  static address d2i_wrapper()               { return _d2i_wrapper; }
  static address d2l_wrapper()               { return _d2l_wrapper; }
  static jint    fpu_cntrl_wrd_std()         { return _fpu_cntrl_wrd_std; }
  static address addr_fpu_cntrl_wrd_std()    { return (address)&_fpu_cntrl_wrd_std; }
  static address addr_fpu_cntrl_wrd_24()     { return (address)&_fpu_cntrl_wrd_24; }
  static address addr_fpu_cntrl_wrd_64()     { return (address)&_fpu_cntrl_wrd_64; }
  static address addr_fpu_cntrl_wrd_trunc()  { return (address)&_fpu_cntrl_wrd_trunc; }
  static address addr_mxcsr_std()            { return (address)&_mxcsr_std; }
  static address addr_fpu_subnormal_bias1()  { return (address)&_fpu_subnormal_bias1; }
  static address addr_fpu_subnormal_bias2()  { return (address)&_fpu_subnormal_bias2; }


  static address select_arraycopy_function(BasicType t, bool aligned, bool disjoint, const char* &name, bool dest_uninitialized);

  static address jbyte_arraycopy()  { return _jbyte_arraycopy; }
  static address jshort_arraycopy() { return _jshort_arraycopy; }
  static address jint_arraycopy()   { return _jint_arraycopy; }
  static address jlong_arraycopy()  { return _jlong_arraycopy; }
  static address oop_arraycopy(bool dest_uninitialized = false) {
    return dest_uninitialized ? _oop_arraycopy_uninit : _oop_arraycopy;
  }
  static address jbyte_disjoint_arraycopy()  { return _jbyte_disjoint_arraycopy; }
  static address jshort_disjoint_arraycopy() { return _jshort_disjoint_arraycopy; }
  static address jint_disjoint_arraycopy()   { return _jint_disjoint_arraycopy; }
  static address jlong_disjoint_arraycopy()  { return _jlong_disjoint_arraycopy; }
  static address oop_disjoint_arraycopy(bool dest_uninitialized = false) {
    return dest_uninitialized ? _oop_disjoint_arraycopy_uninit : _oop_disjoint_arraycopy;
  }

  static address arrayof_jbyte_arraycopy()  { return _arrayof_jbyte_arraycopy; }
  static address arrayof_jshort_arraycopy() { return _arrayof_jshort_arraycopy; }
  static address arrayof_jint_arraycopy()   { return _arrayof_jint_arraycopy; }
  static address arrayof_jlong_arraycopy()  { return _arrayof_jlong_arraycopy; }
  static address arrayof_oop_arraycopy(bool dest_uninitialized = false) {
    return dest_uninitialized ? _arrayof_oop_arraycopy_uninit : _arrayof_oop_arraycopy;
  }

  static address arrayof_jbyte_disjoint_arraycopy()  { return _arrayof_jbyte_disjoint_arraycopy; }
  static address arrayof_jshort_disjoint_arraycopy() { return _arrayof_jshort_disjoint_arraycopy; }
  static address arrayof_jint_disjoint_arraycopy()   { return _arrayof_jint_disjoint_arraycopy; }
  static address arrayof_jlong_disjoint_arraycopy()  { return _arrayof_jlong_disjoint_arraycopy; }
  static address arrayof_oop_disjoint_arraycopy(bool dest_uninitialized = false) {
    return dest_uninitialized ? _arrayof_oop_disjoint_arraycopy_uninit : _arrayof_oop_disjoint_arraycopy;
  }

  static address checkcast_arraycopy(bool dest_uninitialized = false) {
    return dest_uninitialized ? _checkcast_arraycopy_uninit : _checkcast_arraycopy;
  }
  static address unsafe_arraycopy()    { return _unsafe_arraycopy; }
  static address generic_arraycopy()   { return _generic_arraycopy; }

  static address jbyte_fill()          { return _jbyte_fill; }
  static address jshort_fill()         { return _jshort_fill; }
  static address jint_fill()           { return _jint_fill; }
  static address arrayof_jbyte_fill()  { return _arrayof_jbyte_fill; }
  static address arrayof_jshort_fill() { return _arrayof_jshort_fill; }
  static address arrayof_jint_fill()   { return _arrayof_jint_fill; }

  static address aescrypt_encryptBlock()               { return _aescrypt_encryptBlock; }
  static address aescrypt_decryptBlock()               { return _aescrypt_decryptBlock; }
  static address cipherBlockChaining_encryptAESCrypt() { return _cipherBlockChaining_encryptAESCrypt; }
  static address cipherBlockChaining_decryptAESCrypt() { return _cipherBlockChaining_decryptAESCrypt; }
  static address ghash_processBlocks()                 { return _ghash_processBlocks; }

  static address sha1_implCompress()     { return _sha1_implCompress; }
  static address sha1_implCompressMB()   { return _sha1_implCompressMB; }
  static address sha256_implCompress()   { return _sha256_implCompress; }
  static address sha256_implCompressMB() { return _sha256_implCompressMB; }
  static address sha512_implCompress()   { return _sha512_implCompress; }
  static address sha512_implCompressMB() { return _sha512_implCompressMB; }

  static address updateBytesCRC32()      { return _updateBytesCRC32; }
  static address crc_table_addr()        { return _crc_table_adr; }

  static address multiplyToLen()         { return _multiplyToLen; }
  static address squareToLen()           { return _squareToLen; }
  static address mulAdd()                { return _mulAdd; }
  static address montgomeryMultiply()    { return _montgomeryMultiply; }
  static address montgomerySquare()      { return _montgomerySquare; }

  static address select_fill_function(BasicType t, bool aligned, const char* &name);

  static address zero_aligned_words()    { return _zero_aligned_words; }

  static double  intrinsic_log(double d) {
    assert(_intrinsic_log != NULL, "must be defined");
    return _intrinsic_log(d);
  }
  static double  intrinsic_log10(double d) {
    assert(_intrinsic_log10 != NULL, "must be defined");
    return _intrinsic_log10(d);
  }
  static double  intrinsic_exp(double d) {
    assert(_intrinsic_exp != NULL, "must be defined");
    return _intrinsic_exp(d);
  }
  static double  intrinsic_pow(double d, double d2) {
    assert(_intrinsic_pow != NULL, "must be defined");
    return _intrinsic_pow(d, d2);
  }
  static double  intrinsic_sin(double d) {
    assert(_intrinsic_sin != NULL, "must be defined");
    return _intrinsic_sin(d);
  }
  static double  intrinsic_cos(double d) {
    assert(_intrinsic_cos != NULL, "must be defined");
    return _intrinsic_cos(d);
  }
  static double  intrinsic_tan(double d) {
    assert(_intrinsic_tan != NULL, "must be defined");
    return _intrinsic_tan(d);
  }

  //
  // Safefetch stub support
  //

  typedef int      (*SafeFetch32Stub)(int* adr, int errValue);
  typedef intptr_t (*SafeFetchNStub) (intptr_t* adr, intptr_t errValue);

  static SafeFetch32Stub SafeFetch32_stub() { return CAST_TO_FN_PTR(SafeFetch32Stub, _safefetch32_entry); }
  static SafeFetchNStub  SafeFetchN_stub()  { return CAST_TO_FN_PTR(SafeFetchNStub,  _safefetchN_entry); }

  static bool is_safefetch_fault(address pc) {
    return pc != NULL &&
          (pc == _safefetch32_fault_pc ||
           pc == _safefetchN_fault_pc);
  }

  static address continuation_for_safefetch_fault(address pc) {
    assert(_safefetch32_continuation_pc != NULL &&
           _safefetchN_continuation_pc  != NULL,
           "not initialized");

    if (pc == _safefetch32_fault_pc) return _safefetch32_continuation_pc;
    if (pc == _safefetchN_fault_pc)  return _safefetchN_continuation_pc;

    ShouldNotReachHere();
    return NULL;
  }

  //
  // Default versions of the above arraycopy functions for platforms which do
  // not have specialized versions
  //
  static void jbyte_copy     (jbyte*  src, jbyte*  dest, size_t count);
  static void jshort_copy    (jshort* src, jshort* dest, size_t count);
  static void jint_copy      (jint*   src, jint*   dest, size_t count);
  static void jlong_copy     (jlong*  src, jlong*  dest, size_t count);
  static void oop_copy       (oop*    src, oop*    dest, size_t count);
  static void oop_copy_uninit(oop*    src, oop*    dest, size_t count);

  static void arrayof_jbyte_copy     (HeapWord* src, HeapWord* dest, size_t count);
  static void arrayof_jshort_copy    (HeapWord* src, HeapWord* dest, size_t count);
  static void arrayof_jint_copy      (HeapWord* src, HeapWord* dest, size_t count);
  static void arrayof_jlong_copy     (HeapWord* src, HeapWord* dest, size_t count);
  static void arrayof_oop_copy       (HeapWord* src, HeapWord* dest, size_t count);
  static void arrayof_oop_copy_uninit(HeapWord* src, HeapWord* dest, size_t count);
};

// Safefetch allows loading a value from a location that is not known
// to be valid. If the load causes a fault, the error value is returned.
inline int SafeFetch32(int* adr, int errValue) {
  assert(StubRoutines::SafeFetch32_stub(), "stub not yet generated");
  return StubRoutines::SafeFetch32_stub()(adr, errValue);
}
inline intptr_t SafeFetchN(intptr_t* adr, intptr_t errValue) {
  assert(StubRoutines::SafeFetchN_stub(), "stub not yet generated");
  return StubRoutines::SafeFetchN_stub()(adr, errValue);
}

#endif // SHARE_VM_RUNTIME_STUBROUTINES_HPP