[C++/LINUX] My stopwatch behaves differently under Linux than under Windows
Hello,
I found a stopwatch class on the net that works perfectly under Windows, but under Linux it runs much slower... It looks as though the clock is not handled the same way.
Code:
#include "timerClass.h"
///////////////////////////////////////////////////////////////////////
// timerClass.cpp // David Foricher - 03/05/04 //
///////////////////////////////////////////////////////////////////////
// The Timer class provides a way to determine the execution        //
// time of a program or of one of its functions. (precision: 10 ms) //
///////////////////////////////////////////////////////////////////////
Timer::Timer():startClockPeriodsM(0),endClockPeriodsM(0),startedM(false){}
Timer::Timer(const Timer &t):startClockPeriodsM(t.startClockPeriods()), endClockPeriodsM(t.endClockPeriods()), startedM(false){}
Timer::~Timer(){}
void Timer::start()
{
// Record the number of clock ticks since the beginning of the execution
startClockPeriodsM=clock();
startedM=true;
}
void Timer::stop()
{
endClockPeriodsM=clock();
startedM=false;
}
void Timer::reset()
{
startClockPeriodsM = 0;
endClockPeriodsM=0;
startedM=false;
}
void Timer::resume()
{
if(startClockPeriodsM==0) start();
else startedM=true;
}
bool Timer::started() const{ return startedM; }
clock_t Timer::startClockPeriods() const{ return startClockPeriodsM; }
clock_t Timer::endClockPeriods() const{ return endClockPeriodsM; }
double Timer::getTime(){
    // CLOCKS_PER_SEC is the number of clock ticks per second; multiplying
    // the tick difference by 1000.0/CLOCKS_PER_SEC converts it to ms.
    // Note: on POSIX systems clock() returns the CPU time consumed by the
    // process, not elapsed wall-clock time.
    if(started())
        return (clock()-startClockPeriods())*1000.0/CLOCKS_PER_SEC;
    else
        return (endClockPeriodsM-startClockPeriodsM)*1000.0/CLOCKS_PER_SEC;
}
bool Timer::operator == (Timer& t) {
return (getTime()==t.getTime());
}
bool Timer::operator != (Timer& t) { return !((*this)==t); }
bool Timer::operator <= (Timer& t) { return getTime()<=t.getTime(); }
bool Timer::operator >= (Timer& t) { return getTime()>=t.getTime(); }
bool Timer::operator < (Timer & t) { return !((*this)>=t); }
bool Timer::operator > (Timer& t) { return !((*this)<=t); }
Timer& Timer::operator = (const Timer& t)
{
startClockPeriodsM=t.startClockPeriods();
endClockPeriodsM=t.endClockPeriods();
return (*this);
}
Maybe some of these functions shouldn't be used under Linux
...
In my opinion, the problem comes from this function:
Code:
double Timer::getTime(){
    // CLOCKS_PER_SEC is the number of clock ticks per second; multiplying
    // the tick difference by 1000.0/CLOCKS_PER_SEC converts it to ms.
    // Note: on POSIX systems clock() returns the CPU time consumed by the
    // process, not elapsed wall-clock time.
    if(started())
        return (clock()-startClockPeriods())*1000.0/CLOCKS_PER_SEC;
    else
        return (endClockPeriodsM-startClockPeriodsM)*1000.0/CLOCKS_PER_SEC;
}
Thanks :)