atoi on a character array with lots of integers

感情败类 2021-01-16 15:02

I have code in which a character array is populated with integers (converted to char arrays) and read by another function, which converts them back to integers. I have use

2 Answers
  •  迷失自我
    2021-01-16 15:23

    You are copying only 4 characters (dependent on your system's pointer width). This will leave numbers of 4+ characters without a null terminator, leading to runaway strings in the input to atoi.

     sizeof(str.c_str()) //i.e. sizeof(char*) = 4 (32 bit systems)
    

    should be

     str.length() + 1
    

    Otherwise the characters will not be null-terminated.

    STL Only:

    make_testdata(): see all the way down

    Why don't you use streams...?

    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <sstream>
    #include <string>
    #include <vector>
    
    int main()
    {
        std::vector<int> data = make_testdata();
    
        std::ostringstream oss;
        std::copy(data.begin(), data.end(), std::ostream_iterator<int>(oss, "\t"));
    
        std::stringstream iss(oss.str());
    
        std::vector<int> clone;
        std::copy(std::istream_iterator<int>(iss), std::istream_iterator<int>(),
                  std::back_inserter(clone));
    
        //verify that clone now contains the original random data:
        //bool ok = std::equal(data.begin(), data.end(), clone.begin());
    
        return 0;
    }
    

    You could do it a lot faster in plain C with atoi/itoa and some tweaks, but I reckon you should be using binary transmission (see Boost Spirit Karma and protobuf for good libraries) if you need the speed.

    Boost Karma/Qi:

    #include <boost/spirit/include/qi.hpp>
    #include <boost/spirit/include/karma.hpp>
    #include <iterator>
    #include <string>
    #include <vector>
    
    namespace qi=::boost::spirit::qi;
    namespace karma=::boost::spirit::karma;
    
    static const char delimiter = '\0';
    
    int main()
    {
        std::vector<int> data = make_testdata();
    
        std::string astext;
    //  astext.reserve(3 * sizeof(data[0]) * data.size()); // heuristic pre-alloc
        std::back_insert_iterator<std::string> out(astext);
    
        {
            using namespace karma;
            generate(out, delimit(delimiter) [ *int_ ], data);
        //  generate_delimited(out, *int_, delimiter, data); // equivalent
        //  generate(out, int_ % delimiter, data); // somehow much slower!
        }
    
        std::string::const_iterator begin(astext.begin()), end(astext.end());
        std::vector<int> clone;
        qi::parse(begin, end, qi::int_ % delimiter, clone);
    
        //verify that clone now contains the original random data:
        //bool ok = std::equal(data.begin(), data.end(), clone.begin());
    
        return 0;
    }
    

    If you wanted to do architecture independent binary serialization instead, you'd use this tiny adaptation making things a zillion times faster (see benchmark below...):

    karma::generate(out, *karma::big_dword, data);
    // ...
    qi::parse(begin, end, *qi::big_dword, clone);
    

    Boost Serialization

    The best performance can be reached when using Boost Serialization in binary mode:

    #include <boost/archive/binary_oarchive.hpp>
    #include <boost/archive/binary_iarchive.hpp>
    #include <boost/serialization/vector.hpp>
    #include <sstream>
    #include <vector>
    
    int main()
    {
        std::vector<int> data = make_testdata();
    
        std::stringstream ss;
        {
            boost::archive::binary_oarchive oa(ss);
            oa << data;
        }
    
        std::vector<int> clone;
        {
            boost::archive::binary_iarchive ia(ss);
            ia >> clone;
        }
    
        //verify that clone now contains the original random data:
        //bool ok = std::equal(data.begin(), data.end(), clone.begin());
    
        return 0;
    }
    

    Testdata

    (common to all versions above)

    #include <algorithm>
    #include <vector>
    #include <boost/random.hpp>
    
    // generates a deterministic pseudo-random vector of 32Mio ints
    std::vector<int> make_testdata()
    {
        std::vector<int> testdata;
    
        testdata.resize(2 << 24);
        std::generate(testdata.begin(), testdata.end(), boost::mt19937(0));
    
        return testdata;
    }
    

    Benchmarks

    I benchmarked it by

    • using input data of 2<<24 (33554432) random integers
    • not displaying output (we don't want to measure the scrolling performance of our terminal)
    • the rough timings were
      • STL only version isn't too bad actually at 12.6s
      • Karma/Qi text version ran in 5.1s (down from an initial 18s, thanks to Arlen's hint at generate_delimited :))
      • Karma/Qi binary version (big_dword) in only 1.4s (roughly 3-4x as fast)
      • Boost Serialization takes the cake with around 0.8s (or around 13s when substituting text archives for binary)
