copying

In what cases should I use memcpy over standard operators in C++?

☆樱花仙子☆ submitted on 2019-11-28 18:14:49
When can I get better performance using memcpy, or how do I benefit from using it? For example, given:

float a[3];
float b[3];

is this code:

memcpy(a, b, 3*sizeof(float));

faster than this one?

a[0] = b[0]; a[1] = b[1]; a[2] = b[2];

Answer 1: Efficiency should not be your concern. Write clean, maintainable code. It bothers me that so many answers indicate that memcpy() is inefficient. It is designed to be the most efficient way of copying blocks of memory (for C programs). So I wrote the following as a test:

#include <algorithm>
extern float a[3];
extern float b[3];
extern void base();
int main() {
    base();
#if

C# memcpy equivalent

泪湿孤枕 submitted on 2019-11-28 12:15:30
I have two objects of the same type, and I would like to shallow-copy the state of one into the other. In C++ I have memcpy, which is great. How can I do it in C#? MemberwiseClone() is not good enough, because it creates and returns a new object, while I'd like to copy into an existing object. I thought of using reflection, but I'm afraid it would be too slow for production code. I also thought of using one of the .NET serializers, but I think they also create a new object rather than populating an existing one. My use case: I have a template object (a class, not a struct) which needs to be updated by one of its instances

How do I read hex numbers into an unsigned int in C

家住魔仙堡 submitted on 2019-11-27 18:57:32
Question: I want to read hex numbers from a text file into an unsigned integer so that I can execute machine instructions. It's just a simulation that looks inside the text file and, according to each value and its corresponding instruction, outputs the new values in the registers. For example, the instructions would be:

1RXY -> Save register R with the value in memory address XY
2RXY -> Save register R with the value XY
BRXY -> Jump to register R if XY is this and that, etc.
ARXY -> AND register

putting a remote file into hadoop without copying it to local disk

久未见 submitted on 2019-11-27 17:27:20
I am writing a shell script to put data into hadoop as soon as it is generated. I can ssh to my master node, copy the files to a folder over there, and then put them into hadoop. I am looking for a shell command to avoid copying the file to the local disk on the master node. To better explain what I need, here is what I have so far:

1) copy the file to the master node's local disk:

scp test.txt username@masternode:/folderName/

I have already set up an SSH connection using keys, so no password is needed to do this.

2) I can use ssh to remotely execute the hadoop put command:

ssh

How to copy a huge table data into another table in SQL Server

|▌冷眼眸甩不掉的悲伤 submitted on 2019-11-27 11:41:32
I have a table with 3.4 million rows. I want to copy all of this data into another table. I am performing this task using the query below:

select * into new_items from productDB.dbo.items

I need to know the best possible way to do this task.

Answer 1: If you are copying into a new table, the quickest way is probably what you have in your question, unless your rows are very large. If your rows are very large, you may want to use the bulk insert functions in SQL Server. I think you can call them from C#. Or you can first download that data into a text file, then bulk-copy (bcp) it. This has the additional

copying the contents of a binary file

筅森魡賤 submitted on 2019-11-27 09:22:30
I am designing an image decoder, and as a first step I tried to just copy the file using C: i.e., open the file and write its contents to a new file. Below is the code that I used:

while((c=getc(fp))!=EOF) fprintf(fp1,"%c",c);

where fp is the source file and fp1 is the destination file. The program executes without any error, but the image file (".bmp") is not copied properly. I have observed that the size of the copied file is smaller and only 20% of the image is visible; all else is black. When I tried with simple text files, the copy was complete. Do you know what the problem is?

Answer 1: Make sure that the

Progress during large file copy (Copy-Item & Write-Progress?)

天涯浪子 submitted on 2019-11-26 23:59:18
Is there any way to copy a really large file (from one server to another) in PowerShell AND display its progress? There are solutions out there that use Write-Progress in conjunction with looping to copy many files and display progress. However, I can't seem to find anything that would show the progress of a single file. Any thoughts?

Answer 1 (stej): I haven't heard about progress with Copy-Item. If you don't want to use any external tool, you can experiment with streams. The buffer size varies; you may try different values (from 2 KB to 64 KB).

function Copy-File {
    param([string]$from, [string]$to)
    $ffile =
