How to check time performances in a C++ program on Zedboard


Problem Description

I have implemented a C++ program on a Zedboard. It compiles and runs perfectly, but now I would like to check its performance in order to optimize some functions. I have checked some threads here (Testing the performance of a C++ app) and here (Timer function to provide time in nano seconds using C++), but I don't really understand how to apply them to my code...

To make things clear: I'm not good at C++. I never formally learned the language, but have only used it a few times with specific libraries. I am not even the author of the code I'm using (it was given to me by my professors).

My goal here is to check the time spent in each function, and overall, when I execute the program on the Zedboard. The code lives on a Linux image on an SD card, from which the board boots. It uses the OpenCV library for an image processing application. I'm using g++ 4.6.3 as the compiler.

Thanks in advance for your answer!

Recommended Answer

You can create a simple timer class using the <chrono> header. Something like this:

#include <chrono>    // std::chrono clocks and durations
#include <ostream>   // std::ostream for operator<<

class Timer
{
public:
    // steady_clock is monotonic; note that on g++ 4.6 (pre-4.7) it is
    // spelled std::chrono::monotonic_clock and requires -std=c++0x
    using clock = std::chrono::steady_clock;

    void clear() { start(); tse = tsb; }
    void start() { tsb = clock::now(); }
    void stop()  { tse = clock::now(); }

    // `auto` return type deduction is a C++14 feature that g++ 4.6
    // lacks, so the return type is spelled out explicitly
    long long nsecs() const
    {
        using namespace std::chrono;
        return duration_cast<nanoseconds>(tse - tsb).count();
    }

    double usecs() const { return double(nsecs()) / 1000.0; }
    double msecs() const { return double(nsecs()) / 1000000.0; }
    double  secs() const { return double(nsecs()) / 1000000000.0; }

    friend std::ostream& operator<<(std::ostream& o, Timer const& timer)
    {
        return o << timer.secs();
    }

private:
    clock::time_point tsb;
    clock::time_point tse;
};

You can use it simply like this:

Timer timer;

timer.start();

// do some stuff (std::this_thread::sleep_for needs #include <thread>)
std::this_thread::sleep_for(std::chrono::milliseconds(600));

timer.stop();

std::cout << timer << " seconds" << '\n';

EDIT: On POSIX systems you can use clock_gettime() if <chrono> is not available:

#include <time.h>    // clock_gettime, CLOCK_MONOTONIC, timespec (link with -lrt on older glibc)
#include <ostream>

class Timer
{
public:
    void clear() { start(); tse = tsb; }
    void start() { clock_gettime(CLOCK_MONOTONIC, &tsb); }
    void stop()  { clock_gettime(CLOCK_MONOTONIC, &tse); }

    long long nsecs() const
    {
        // 64-bit arithmetic matters here: on a 32-bit ARM target like the
        // Zedboard, a plain long would overflow after roughly 2 seconds'
        // worth of nanoseconds
        long long b = (tsb.tv_sec * 1000000000LL) + tsb.tv_nsec;
        long long e = (tse.tv_sec * 1000000000LL) + tse.tv_nsec;
        return e - b;
    }

    double usecs() const { return double(nsecs()) / 1000.0; }
    double msecs() const { return double(nsecs()) / 1000000.0; }
    double  secs() const { return double(nsecs()) / 1000000000.0; }

    friend std::ostream& operator<<(std::ostream& o, Timer const& timer)
    {
        return o << timer.secs();
    }

private:
    timespec tsb;
    timespec tse;
};

Another Recommended Answer

I have found an unsatisfying solution, but I thought I could still post it in case it is of any help.

I made use of the gettimeofday() function, declared in <sys/time.h> (not <time.h>). It's pretty simple to use, but it has a flaw which I explain below:

#include <sys/time.h>   // gettimeofday, timeval
#include <iostream>

timeval t1, t2;
gettimeofday(&t1, NULL);
/* some function */
gettimeofday(&t2, NULL);
double elapsed_ms = (t2.tv_sec - t1.tv_sec) * 1000.0
                  + (t2.tv_usec - t1.tv_usec) / 1000.0;
std::cout << elapsed_ms << " ms" << "\n";

This way I measure a time in milliseconds and display it on screen. However, gettimeofday() measures wall-clock time, not the CPU time my program actually used. To be clear, the interval between the two calls does contain my function, but it also contains whatever time the other processes running in the background on Ubuntu happen to consume. In other words, it does not give me the precise execution time of my function, but a slightly higher value.

EDIT: I found yet another solution, using the clock() function from <time.h>, and the result seems correct compared to the ones I get with the previous method (clock() counts CPU time, so background processes are excluded). Unfortunately, the precision is not enough, since it gives a number of seconds with only 3 digits.