Calculating .5*x and x/2
I heard that a computer does the operation .5*x faster than x/2. Is this true? Can you please tell me why or how this works?
Thank you.
I think you're getting confused by talking about binary arithmetic using decimal numbers and the shortcuts that have been developed to multiply and divide them on a blackboard.
Anyway, I suppose that how long a computer takes to multiply and divide, especially by two, depends on the algorithm used and how good the programmer was. In other words, you'll have to look at the code to find the answer to your question.
In my quick test, I think the weak link in the speed chain is monitor display time.
binary and source attached
Code:
#include <stdio.h>

int main(void) {
    double x = 987654321;
    printf("%f\n", x * .5);
    printf("%f\n", .5 * x);
    printf("%f\n", x / 2);
    return 0;
}
Keep in mind that no matter what programming language you are using,
once compiled it will run as machine code on the CPU.
Assuming an x86 CPU, factors such as whether the number is an 8-, 16-, or 32-bit value,
and whether it is in memory or already loaded into a register, all make a difference.
e.g.
MOV ax, 0x400
MOV bx, 0x002
MUL bx ; (or DIV; for MUL the result is in DX:AX, for DIV the quotient is in AX)
According to the Intel specs, a MUL takes on average 15 "clocks",
whereas a DIV takes on average 16.
I say average because it depends on the number of bits and on whether the operands are in registers or memory.
But all this is really academic, since the time taken by all the other functions
and routines in the rest of the high-level C program will be much, much longer than the
slight difference between a MUL and a DIV.
So unless you happen to be doing millions and millions of DIV or MUL operations, you will not notice any advantage of one method over the other.
EDIT:
If you want to try it for yourself, compile these two short programs with NASM,
then use 'strace -t' to get an approximation of the execution time.
Please note that neither program gives any output to the screen; they are simply
used to get an idea of the computation time taken for a DIV or MUL:
Here is the one for MUL, save it as mul.asm then compile.
Code:
section .text
global _start
_start:
    mov eax, 400
    mov ebx, 2
    mul ebx         ; EDX:EAX = EAX * EBX
    mov eax, 1      ; system call number (sys_exit)
    int 0x80        ; call kernel
Here is the one for DIV, save it as div.asm then compile.

Code:
section .text
global _start
_start:
    mov eax, 400
    xor edx, edx    ; DIV divides EDX:EAX, so EDX must be cleared first
    mov ebx, 2
    div ebx         ; EAX = quotient, EDX = remainder
    mov eax, 1      ; system call number (sys_exit)
    int 0x80        ; call kernel

This way you have no high-level stuff slowing down the program.
ANOTHER EDIT:
Remember that a lot of these "myths" about code optimization techniques
hark back to the days of the 386 (and before), when certain combinations
of op-codes would yield a slightly faster execution time.
The CPUs of today all have pre-fetch buffers and other code optimization
techniques built in and run at such high speeds compared to their "ancient"
cousins that it's simply not worth worrying about.
Another popular one was that, when initializing a variable to zero, it was faster
to do a XOR eax, eax rather than a MOV eax, 0.
Dave.
- L18L
- Posts: 3479
- Joined: Sat 19 Jun 2010, 18:56
- Location: www.eussenheim.de/
mahaju wrote: I thought that at the machine level such a kind of test happened in hardware and not in code; for example, if division by zero is done, some kind of overflow flag bit is set in the microprocessor's PSW register.
Setting this bit might explain the difference?
Dave_G wrote: According to the Intel specs, a MUL takes on average 15 "clocks",
whereas a DIV takes on average 16.
Note that the CPU does not check for dividing (or multiplying) by zero
before it does the actual operation.
Likewise if an overflow occurs.
If it encounters these conditions, then the flag is set.
@mahaju,
The flags register in the x86 range is not called the PSW register; are you
sure you are referring to the same CPU, and not a microcontroller such
as the PIC or AVR?
Dave.