llvm-ir

How do I get debug information from a function?

安稳与你 submitted on 2019-12-21 16:59:47
Question: I've used Clang to compile a function with debug information enabled. For Instructions there's the handy getDebugLoc(), but there's no such thing for Functions. Given a Function instance, how can I get the debug information (I'm guessing in DISubprogram form) for it? I've seen the guide entry explaining how that debug information is represented; the metadata does contain a link to the function, but there's apparently no link in the other direction. Am I supposed to iterate over all the metadata in…
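On LLVM 3.8 and later, Function carries the link directly: Function::getSubprogram() returns the attached DISubprogram (older releases required walking the compile units in the llvm.dbg.cu named metadata instead). A minimal sketch, assuming a Module M that was built with debug info; the function name printFunctionDebugInfo is illustrative:

```cpp
#include "llvm/IR/DebugInfoMetadata.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

// Print the source location recorded for every defined function in a module.
// Assumes the module was compiled with debug info (e.g. clang -g -emit-llvm).
void printFunctionDebugInfo(Module &M) {
  for (Function &F : M) {
    // getSubprogram() returns the DISubprogram attached to the function's
    // !dbg metadata, or null for declarations and code built without -g.
    if (DISubprogram *SP = F.getSubprogram())
      errs() << F.getName() << " defined at " << SP->getFilename() << ":"
             << SP->getLine() << "\n";
  }
}
```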

generating CFG for whole source code with LLVM

笑着哭i submitted on 2019-12-21 15:47:38
Question: Does anyone from the LLVM community know if there is a way to generate a CFG for the whole input source code using opt -dot-cfg foo.ll(.bc)? As it stands, this generates one CFG per function, so the connections between functions are ignored. It also seems that the older analyze tool has been deprecated. Answer 1: I wonder if you found any way to get an interprocedural CFG. I found that inlining the called functions with the inliner passes might help, but I haven't been able to get it to work yet. I've posted this…
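There is no built-in interprocedural CFG pass; opt -dot-callgraph gives the function-level edges, and opt -dot-cfg gives the per-function block-level graphs. One way to combine the two is to emit a single DOT file yourself, drawing the ordinary CFG edges plus a dashed edge from each direct call site to the callee's entry block. The following is only a sketch of that idea (emitWholeProgramCFG and the node-naming scheme are illustrative, not an existing LLVM utility):

```cpp
#include <string>

#include "llvm/IR/CFG.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

// Emit one DOT graph for the whole module: a node per basic block,
// intra-procedural edges from terminators, and a dashed edge from each
// call site's block to the entry block of the directly called function.
void emitWholeProgramCFG(Module &M, raw_ostream &OS) {
  auto nodeName = [](const BasicBlock &BB) {
    std::string Name;
    raw_string_ostream RSO(Name);
    RSO << '"' << BB.getParent()->getName() << ':' << &BB << '"';
    return RSO.str();
  };

  OS << "digraph \"whole-program CFG\" {\n";
  for (Function &F : M) {
    if (F.isDeclaration())
      continue;
    for (BasicBlock &BB : F) {
      // Ordinary intra-procedural successors.
      for (BasicBlock *Succ : successors(&BB))
        OS << "  " << nodeName(BB) << " -> " << nodeName(*Succ) << ";\n";
      // Interprocedural edges for direct calls to defined functions.
      for (Instruction &I : BB)
        if (auto *CB = dyn_cast<CallBase>(&I))
          if (Function *Callee = CB->getCalledFunction())
            if (!Callee->isDeclaration())
              OS << "  " << nodeName(BB) << " -> "
                 << nodeName(Callee->getEntryBlock()) << " [style=dashed];\n";
    }
  }
  OS << "}\n";
}
```

Indirect calls are not resolved here; handling them would require a call-graph or alias analysis on top of this.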

LLVM and compiler nomenclature

China☆狼群 submitted on 2019-12-21 03:47:27
Question: I am looking into the LLVM system and I have read through the Getting Started documentation. However, some of the nomenclature (and the wording in the clang example) is still a little confusing. The following terms and commands are all part of the compilation process, and I was wondering if someone might be able to explain them a little better for me: clang -S vs. clang -c (I know what -c does, but how do the results differ?); (Edit) LLVM bitcode vs. LLVM IR (what is the difference?); .ll…

Error in compiling Haskell .ll file with LLVM backend

让人想犯罪 __ submitted on 2019-12-20 03:14:34
Question: I want to compile Haskell using the GHC front end and the LLVM back end. I have the following code in my Haskell hello.hs file: main = putStrLn "Hello World!" I compile hello.hs with GHC using the following command: ghc -fllvm -keep-llvm-files -force-recomp hello.hs which generates a hello.ll file along with other files. I then try to assemble this .ll file into a .bc file with llvm-as hello.ll -o hello.bc and then compile the .bc file with llc hello.bc -o hello which generates an executable file. The…

LLVM insert pthread function calls into IR

↘锁芯ラ submitted on 2019-12-19 04:12:25
Question: I'm writing an LLVM pass (it's a LoopPass) that needs to insert pthread function calls like pthread_create() and pthread_join() into the IR. I know how to create and insert a function call into the IR, but I am having trouble getting the Function* representation of the pthread functions in LLVM. Here's what I have: Function *pthread_create_func = currentModule->getFunction("pthread_create"); but it returns NULL. As a comparison, Function *printf_func = currentModule->getFunction("printf"); will return the…
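Module::getFunction() only finds symbols the module already declares: printf is found because the compiled source called it, whereas nothing in the module mentions pthread_create, so the lookup returns null. The usual fix is Module::getOrInsertFunction(), which adds the declaration with an explicit FunctionType. A sketch of that, assuming recent LLVM where getOrInsertFunction returns a FunctionCallee (older releases return a Constant*), and modelling pthread_t*/pthread_attr_t* as opaque i8* for call-site purposes:

```cpp
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// Declare pthread_create in the module if it is not already there, so a pass
// can emit calls to it. C signature:
//   int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
//                      void *(*start_routine)(void *), void *arg);
// pthread_t* and pthread_attr_t* are modelled as i8* here, an assumption that
// is good enough for building the call; the linker resolves the real symbol.
FunctionCallee getPthreadCreate(Module &M) {
  LLVMContext &Ctx = M.getContext();
  Type *I8Ptr = Type::getInt8PtrTy(Ctx);
  Type *I32 = Type::getInt32Ty(Ctx);
  // void *(*start_routine)(void *)
  FunctionType *StartFnTy =
      FunctionType::get(I8Ptr, {I8Ptr}, /*isVarArg=*/false);
  FunctionType *CreateTy = FunctionType::get(
      I32, {I8Ptr, I8Ptr, StartFnTy->getPointerTo(), I8Ptr},
      /*isVarArg=*/false);
  return M.getOrInsertFunction("pthread_create", CreateTy);
}
```

The returned callee can then be handed to IRBuilder::CreateCall at the insertion point chosen by the pass; remember to link the final binary with -lpthread.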

Execute LLVM IR code generated from Rust/Python source code

徘徊边缘 submitted on 2019-12-19 02:39:18
Question: When I generate LLVM IR code from C++, I can use the console command clang++ -emit-llvm -S test.cpp to get a test.ll file, which is the LLVM IR I want. To get an executable, these are the steps to follow: llvm-as test.ll gives me the test.bc file; llc test.bc -o test.s gives me the test.s file; clang++ test.s -o test.native gives me a native file that I can execute. For C++ this works just fine. In theory, should the same steps apply when I write Rust or Python code? I take my Rust…

How to write a custom intermodular pass in LLVM?

邮差的信 submitted on 2019-12-18 13:09:02
Question: I've written a standard analysis pass in LLVM by extending the FunctionPass class. Everything seems to make sense. Now what I'd like to do is write a couple of intermodular passes, that is, passes that allow me to analyze more than one module at a time. The purpose of one such pass is to construct a call graph of the entire application. The purpose of the other is that I have an idea for an optimization involving function calls and their parameters. I know about interprocedural…
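The usual route is to merge the translation units first (llvm-link a.bc b.bc -o whole.bc, or build with LTO) and then run a ModulePass over the combined module, since a ModulePass sees every function in its module. A sketch of such a pass under the legacy pass manager (the pass name wp-callgraph and the class name are illustrative; for real work LLVM's CallGraph analysis in llvm/Analysis/CallGraph.h already provides the graph):

```cpp
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Module.h"
#include "llvm/Pass.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
// Legacy-pass-manager ModulePass that prints every direct call edge in the
// (already linked) module. Because a ModulePass sees the whole module,
// linking all translation units first makes this a whole-program analysis.
struct WholeProgramCallGraph : public ModulePass {
  static char ID;
  WholeProgramCallGraph() : ModulePass(ID) {}

  bool runOnModule(Module &M) override {
    for (Function &F : M)
      for (Instruction &I : instructions(F))
        if (auto *CB = dyn_cast<CallBase>(&I))
          if (Function *Callee = CB->getCalledFunction())
            errs() << F.getName() << " -> " << Callee->getName() << "\n";
    return false; // analysis only, the IR is not modified
  }
};
} // namespace

char WholeProgramCallGraph::ID = 0;
static RegisterPass<WholeProgramCallGraph>
    X("wp-callgraph", "Print direct call edges across the module");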

Call LLVM JIT from a C program

允我心安 submitted on 2019-12-17 22:08:20
Question: I have generated a .bc file with the online compiler on llvm.org, and I would like to know if it is possible to load this .bc file from a C or C++ program, execute the IR in the .bc file with the LLVM JIT (programmatically in the C program), and get the results. How can I accomplish this? Answer 1: Here's some working code based on Nathan Howell's: #include <string> #include <memory> #include <iostream> #include <llvm/LLVMContext.h> #include <llvm/Target/TargetSelect.h> #include <llvm/Bitcode…
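The answer's code is cut off here and targets a pre-3.5 header layout. As a hedged sketch of the same idea against a newer MCJIT-era API (not the original answer's code): parse the bitcode into a Module, hand it to an ExecutionEngine, look up the entry point, and call it. The file name handling and the assumption that the bitcode defines a parameterless int main() are illustrative:

```cpp
#include <iostream>
#include <memory>
#include <string>

#include "llvm/Bitcode/BitcodeReader.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h" // forces the MCJIT engine to be linked in
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main(int argc, char **argv) {
  if (argc < 2) {
    std::cerr << "usage: runbc <file.bc>\n";
    return 1;
  }

  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();

  LLVMContext Ctx;

  // Read the bitcode file and parse it into a Module.
  auto Buf = MemoryBuffer::getFile(argv[1]);
  if (!Buf) {
    std::cerr << "cannot open " << argv[1] << "\n";
    return 1;
  }
  Expected<std::unique_ptr<Module>> ModOrErr =
      parseBitcodeFile((*Buf)->getMemBufferRef(), Ctx);
  if (!ModOrErr) {
    logAllUnhandledErrors(ModOrErr.takeError(), errs(), "bitcode error: ");
    return 1;
  }

  // Hand the module to an MCJIT execution engine.
  std::string Err;
  std::unique_ptr<ExecutionEngine> EE(
      EngineBuilder(std::move(*ModOrErr)).setErrorStr(&Err).create());
  if (!EE) {
    std::cerr << "failed to create ExecutionEngine: " << Err << "\n";
    return 1;
  }
  EE->finalizeObject();

  // Assumes the bitcode defines a parameterless `int main()`; adjust the
  // name and signature for your own entry point.
  auto *Entry = reinterpret_cast<int (*)()>(EE->getFunctionAddress("main"));
  if (!Entry) {
    std::cerr << "no main() found in the bitcode\n";
    return 1;
  }
  std::cout << "main returned " << Entry() << "\n";
  return 0;
}
```

A typical build line is clang++ runbc.cpp $(llvm-config --cxxflags --ldflags --libs core mcjit native --system-libs), with the exact component list depending on the LLVM version installed.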