Catenation and specialization for Tcl virtual machine performance
Cited in the top 18% of papers from 2004.
Abstract
We present techniques for eliminating dispatch overhead in a virtual machine interpreter using lightweight just-in-time native-code compilation. In the context of the Tcl VM, we convert bytecodes to native Sparc code by concatenating the native instructions the VM uses to implement each bytecode instruction, thus eliminating the dispatch loop. Furthermore, immediate arguments of bytecode instructions are substituted into the native code using runtime specialization. Native code emitted by the C compiler is not amenable to relocation by copying; the code must be fixed up for correct execution. The dynamic instruction count improvement from eliding dispatch depends on the length, in native instructions, of each bytecode opcode implementation. These are relatively long in Tcl, yet dispatch remains a significant overhead. However, their length also causes our technique to overflow the instruction cache, and our native compilation itself consumes runtime. Some benchmarks run up to three times faster, but roughly half slow down or exhibit little change.
Related Papers
- VulHunter: Hunting Vulnerable Smart Contracts at EVM Bytecode-Level via Multiple Instance Learning (2023), 29 citations
- Ethereum smart contracts: Analysis and statistics of their source code and opcodes (2020), 40 citations
- Bytecode-to-C ahead-of-time compilation for Android Dalvik virtual machine (2015), 5 citations
- A Selective Ahead-Of-Time Compiler on Android Device (2012), 7 citations