high accuracy math library...

Hul Tytus wrote:
There was once a math library in C, if memory serves, with the
basic functions, i.e. +, -, * and /, and some others also. The resolution
was adjustable, so changing a reference variable (or was that a #define?)
from 32 to 256 would change the size of the variables to 256 bits.
Anyone remember the name or location of that library?

Hul
 
On Fri, 14 Jan 2022 16:50:17 -0000 (UTC), Hul Tytus <ht@panix.com>
wrote:

There was once a math library in C, if memory serves, with the
basic functions, i.e. +, -, * and /, and some others also. The resolution
was adjustable, so changing a reference variable (or was that a #define?)
from 32 to 256 would change the size of the variables to 256 bits.
Anyone remember the name or location of that library?

Hul

There seem to be a bunch:

https://en.wikipedia.org/wiki/List_of_arbitrary-precision_arithmetic_software



--

I yam what I yam - Popeye
 
On 14/01/2022 16:50, Hul Tytus wrote:
There was once a math library in C, if memory serves, with the
basic functions, i.e. +, -, * and /, and some others also. The resolution
was adjustable, so changing a reference variable (or was that a #define?)
from 32 to 256 would change the size of the variables to 256 bits.
Anyone remember the name or location of that library?

I don't recall that particular one but GCC can be fairly easily
persuaded to go up to 128-bit reals, which are usually good enough for
all but the most insane of floating point calculations.

I think your choices there are limited to 32, 64, 80, 128

https://gcc.gnu.org/onlinedocs/gcc/Floating-Types.html

It includes the most common transcendental functions as well.

Quad precision floating point runs slowly, so do as much as you can at a
lower precision and then refine the answer using that as a seed value.
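
(A minimal sketch of that seed-and-refine idea using GCC's __float128 -
the sqrt example and names are illustrative, not from the original post;
compile with gcc file.c -lquadmath:)

#include <stdio.h>
#include <math.h>
#include <quadmath.h>   /* GCC's libquadmath: __float128 support */

int main(void)
{
    double a = 2.0;

    /* cheap 64-bit approximation as the seed (~16 digits)... */
    __float128 x = sqrt(a);

    /* ...then one Newton step x' = (x + a/x)/2 in quad precision
       roughly doubles the number of correct digits */
    x = (x + (__float128)a / x) / 2;

    char buf[64];
    quadmath_snprintf(buf, sizeof buf, "%.33Qg", x);
    printf("sqrt(2) ~= %s\n", buf);
    return 0;
}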

I used to like having 80 bit reals available in the good old prehistoric
days of MSC v6. Today it requires some effort to use them with MSC :(

--
Regards,
Martin Brown
 
On Fri, 14 Jan 2022 18:12:43 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 16:50, Hul Tytus wrote:
There was once a math library in C, if memory serves, with the
basic functions, i.e. +, -, * and /, and some others also. The resolution
was adjustable, so changing a reference variable (or was that a #define?)
from 32 to 256 would change the size of the variables to 256 bits.
Anyone remember the name or location of that library?

I don't recall that particular one but GCC can be fairly easily
persuaded to go up to 128-bit reals, which are usually good enough for
all but the most insane of floating point calculations.

I think your choices there are limited to 32, 64, 80, 128

https://gcc.gnu.org/onlinedocs/gcc/Floating-Types.html

It includes the most common transcendental functions as well.

Quad precision floating point runs slowly, so do as much as you can at a
lower precision and then refine the answer using that as a seed value.

I used to like having 80 bit reals available in the good old prehistoric
days of MSC v6. Today it requires some effort to use them with MSC :(

In the old days, only VAX/VMS had hardware support for 128-bit floats
(not IEEE format though). In the cited GCC list, which of these are
directly supported in hardware, versus software emulation?

Most current machines directly support multi precision integer
arithmetic for power-of-2 lengths, but it is done in multiple
coordinated machine-code operations, so it's partly in software.

Of course, when the word size goes up, the various approximation
polynomials must improve, which generally means using higher-order
polynomials, so the slowdown isn't all due to slower computational
hardware.

The only real application of 128-bit floats that I am aware of was the
design of interferometers such as LIGO, where one is tracking very
small fractions of an optical wavelength over path lengths measured in
kilometers, with at least two spare decimal digits to absorb numerical
noise from the ray-trace computations.

Joe Gwinn
 
On 15/1/22 3:50 am, Hul Tytus wrote:
There was once a math library in C, if memory serves, with the
basic functions, i.e. +, -, * and /, and some others also. The resolution
was adjustable, so changing a reference variable (or was that a #define?)
from 32 to 256 would change the size of the variables to 256 bits.
Anyone remember the name or location of that library?

You might be remembering the GNU Multiple Precision Library:
<https://gmplib.org/>
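
(For reference, a minimal sketch of the C integer side of GMP - the
operands are arbitrary; link with -lgmp:)

#include <stdio.h>
#include <gmp.h>   /* GNU Multiple Precision arithmetic */

int main(void)
{
    mpz_t a, b, c;
    mpz_inits(a, b, c, NULL);

    /* operands far beyond 64 bits work the same as small ones */
    mpz_set_str(a, "123456789012345678901234567890", 10);
    mpz_set_str(b, "987654321098765432109876543210", 10);

    mpz_mul(c, a, b);               /* c = a * b, exact */
    gmp_printf("a*b = %Zd\n", c);

    mpz_clears(a, b, c, NULL);
    return 0;
}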

CH
 
Thanks for the references everyone.

Hul

jlarkin@highlandsniptechnology.com wrote:
On Fri, 14 Jan 2022 16:50:17 -0000 (UTC), Hul Tytus <ht@panix.com>
wrote:

There was once a math library in C, if memory serves, with the
basic functions, i.e. +, -, * and /, and some others also. The resolution
was adjustable, so changing a reference variable (or was that a #define?)
from 32 to 256 would change the size of the variables to 256 bits.
Anyone remember the name or location of that library?

Hul

There seem to be a bunch:

https://en.wikipedia.org/wiki/List_of_arbitrary-precision_arithmetic_software


--

I yam what I yam - Popeye
 
On 14/01/2022 21:01, Joe Gwinn wrote:
On Fri, 14 Jan 2022 18:12:43 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 16:50, Hul Tytus wrote:
There was once a math library in C, if memory serves, with the
basic functions, i.e. +, -, * and /, and some others also. The resolution
was adjustable, so changing a reference variable (or was that a #define?)
from 32 to 256 would change the size of the variables to 256 bits.
Anyone remember the name or location of that library?

I don't recall that particular one but GCC can be fairly easily
persuaded to go up to 128-bit reals, which are usually good enough for
all but the most insane of floating point calculations.

I think your choices there are limited to 32, 64, 80, 128

https://gcc.gnu.org/onlinedocs/gcc/Floating-Types.html

It includes the most common transcendental functions as well.

Quad precision floating point runs slowly, so do as much as you can at a
lower precision and then refine the answer using that as a seed value.

I used to like having 80 bit reals available in the good old prehistoric
days of MSC v6. Today it requires some effort to use them with MSC :(

In the old days, only VAX/VMS had hardware support for 128-bit floats
(not IEEE format though). In the cited GCC list, which of these are
directly supported in hardware, versus software emulation?

32, 64 are native x87 and full SSE floating point support
80 x87 only but GCC does it fairly well
128 emulated and slower

Always work in the hardware supported ones to obtain an approximate
answer unless and until you need that extra precision.

Preferably frame it so you refine an approximate starting guess.

Most current machines directly support multi precision integer
arithmetic for power-of-2 lengths, but it is done in multiple
coordinated machine-code operations, so it's partly in software.

32, 64 and 128 integer support are sometimes native at least for some
platforms. +, - and * all execute in one nominal CPU cycle* too!
(at least for 32, 64 bit - I have never bothered with 128 bit int)
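
(On 64-bit targets GCC and Clang expose that as __int128; a quick
illustrative sketch, mine rather than the poster's:)

#include <stdio.h>

int main(void)
{
    /* one 128-bit multiply compiles to a few coordinated 64-bit ops */
    unsigned __int128 x = (unsigned __int128)0xFFFFFFFFFFFFFFFFull
                        * 0xFFFFFFFFFFFFFFFFull;

    /* printf has no length modifier for __int128: print two halves */
    printf("%016llx%016llx\n",
           (unsigned long long)(x >> 64),
           (unsigned long long)x);
    return 0;
}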

* sometimes they can appear to take less than one cycle due to out of
order execution and the opportunities to do work whilst divides are in
progress. Divides are always best avoided or, if that is impossible,
their number minimised. Divide is 10-20x slower than all the other
primitive operations, and two divides close together can be *much*
slower. Pipeline stalls typically cost around 90 cycles per hit.

Divide remains a PITA and is worth eliminating where possible.

I have an assembler implementation for a special case division that can
be faster than the hardware divide for the situation it aims to solve.

Basically 1/(1-x) = 1 + x + x^2 + x^3 + x^4 + ...
                  = (1 + x)(1 + x^2)(1 + x^4)(1 + x^8)...

And for smallish x it converges faster than the hardware FP divide.
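
(A rough C illustration of the product form - the idea only, not
Martin's actual assembler:)

/* approximate 1/(1-x) for |x| < 1 without a divide:
   each factor (1 + x^(2^k)) doubles the series terms covered */
static double recip_one_minus(double x)
{
    double r = 1.0 + x;
    double p = x * x;               /* x^2, then x^4, x^8, ... */
    for (int i = 0; i < 5; i++) {   /* 6 factors: terms up to x^63 */
        r *= 1.0 + p;
        p *= p;
    }
    return r;
}

Truncating the loop earlier trades accuracy for speed, which is the
point when x is small.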

Of course, when the word size goes up, the various approximation
polynomials must improve, which generally means using higher-order
polynomials, so the slowdown isn't all due to slower computational
hardware.

There aren't all that many that need it.

Most planetary dynamics can be done with 80 bit reals with a bit to spare.

The only real application of 128-bit floats that I am aware of was the
design of interferometers such as LIGO, where one is tracking very
small fractions of an optical wavelength over path lengths measured in
kilometers, with at least two spare decimal digits to absorb numerical
noise from the ray-trace computations.

That might be a genuine application.

The only times I have played with them have been to investigate the
weird constants that play a part in some chaotic equations. I was
curious to see how much of the behaviour was due to finite mantissa
length and how much was inherent in the mathematics. Doubling the length
of the mantissa goes a long way to solving that particular problem.
(but it is rather slow)

--
Regards,
Martin Brown
 
On Sat, 15 Jan 2022 17:50:14 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 21:01, Joe Gwinn wrote:
On Fri, 14 Jan 2022 18:12:43 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 16:50, Hul Tytus wrote:
There was once a math library in C, if memory serves, with the
basic functions, i.e. +, -, * and /, and some others also. The resolution
was adjustable, so changing a reference variable (or was that a #define?)
from 32 to 256 would change the size of the variables to 256 bits.
Anyone remember the name or location of that library?

I don't recall that particular one but GCC can be fairly easily
persuaded to go up to 128-bit reals, which are usually good enough for
all but the most insane of floating point calculations.

I think your choices there are limited to 32, 64, 80, 128

https://gcc.gnu.org/onlinedocs/gcc/Floating-Types.html

It includes the most common transcendental functions as well.

Quad precision floating point runs slowly, so do as much as you can at a
lower precision and then refine the answer using that as a seed value.

I used to like having 80 bit reals available in the good old prehistoric
days of MSC v6. Today it requires some effort to use them with MSC :(

In the old days, only VAX/VMS had hardware support for 128-bit floats
(not IEEE format though). In the cited GCC list, which of these are
directly supported in hardware, versus software emulation?

32, 64 are native x87 and full SSE floating point support
80 x87 only but GCC does it fairly well

Power Basic has a native 80-bit float type and a 64-bit integer.



--

I yam what I yam - Popeye
 
On Saturday, January 15, 2022 at 12:58:54 PM UTC-5, jla...@highlandsniptechnology.com wrote:
On Sat, 15 Jan 2022 17:50:14 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 21:01, Joe Gwinn wrote:
On Fri, 14 Jan 2022 18:12:43 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 16:50, Hul Tytus wrote:
There was once a math library in C, if memory serves, with the
basic functions, i.e. +, -, * and /, and some others also. The resolution
was adjustable, so changing a reference variable (or was that a #define?)
from 32 to 256 would change the size of the variables to 256 bits.
Anyone remember the name or location of that library?

I don't recall that particular one but GCC can be fairly easily
persuaded to go up to 128-bit reals, which are usually good enough for
all but the most insane of floating point calculations.

I think your choices there are limited to 32, 64, 80, 128

https://gcc.gnu.org/onlinedocs/gcc/Floating-Types.html

It includes the most common transcendental functions as well.

Quad precision floating point runs slowly, so do as much as you can at a
lower precision and then refine the answer using that as a seed value.

I used to like having 80 bit reals available in the good old prehistoric
days of MSC v6. Today it requires some effort to use them with MSC :(

In the old days, only VAX/VMS had hardware support for 128-bit floats
(not IEEE format though). In the cited GCC list, which of these are
directly supported in hardware, versus software emulation?

32, 64 are native x87 and full SSE floating point support
80 x87 only but GCC does it fairly well
Power Basic has a native 80-bit float type and a 64-bit integer.

Which corresponds to the floating point types supported in the early Intel chips.

--

Rick C.

- Get 1,000 miles of free Supercharging
- Tesla referral code - https://ts.la/richard11209
 
On 15/01/2022 17:58, jlarkin@highlandsniptechnology.com wrote:
On Sat, 15 Jan 2022 17:50:14 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 21:01, Joe Gwinn wrote:

In the old days, only VAX/VMS had hardware support for 128-bit floats

Probably a very wasteful decision that cost them dear. The requirement
for anything above a 64 bit FP word length is very esoteric.

The most popular machine back in that era for high speed floating point
was the Cyber 7600 (60-bit word), which powered Manchester University's
Jodrell Bank processing and the BMEWS early warning system, amongst
other things.

(not IEEE format though). In the cited GCC list, which of these are
directly supported in hardware, versus software emulation?

32, 64 are native x87 and full SSE floating point support
80 x87 only but GCC does it fairly well

Power Basic has a native 80-bit float type and a 64-bit integer.

I'm so impressed. NOT

MS C used to have it back in the old v6 days but they rationalised
things to only have 64 bit FP support in C/C++ a very long time ago.

Most decent compilers *do* offer 80 bit reals. It is a pity that
Mickeysoft don't because their code optimiser is streets ahead of both
Intel and GCC's at handling out of order execution parallelism.

Intel C and GCC compilers still support 80 bit floating point.

On the code I have been testing recently Intel generates code that
effectively *forces* a pipeline stall more often than not. MSC somehow
manages the opposite. Pipeline stalls cost around 90 cycles which is not
insignificant in a routine that should take 300 cycles.

Putting two divides close together with the second one dependent on the
result of the other is one way to do it. MSC tries much harder to
utilise the cycles where the divide hardware isn't ready to answer.
(at least it does when you enable every possible speed optimisation)

Sometimes it generates loop unrolled code that is completely wrong too :(

--
Regards,
Martin Brown
 
On 1/16/2022 2:26 AM, Martin Brown wrote:
On 15/01/2022 17:58, jlarkin@highlandsniptechnology.com wrote:
On Sat, 15 Jan 2022 17:50:14 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 21:01, Joe Gwinn wrote:

In the old days, only VAX/VMS had hardware support for 128-bit floats

Probably a very wasteful decision that cost them dear. The requirement for
anything above a 64 bit FP word length is very esoteric.

The most popular machine back in that era for high speed floating point
was the Cyber 7600 (60-bit word), which powered Manchester University's
Jodrell Bank processing and the BMEWS early warning system, amongst
other things.

IIRC, the 645 had support for a 71 bit mantissa (?) ca 1970.

OTOH, it's not like there were many units built! ("popular") :>

A shame that so many (all?) of those early machines turned out
to be (mostly) evolutionary dead-ends. Imagine what things
would be like had we started *there* instead of from Intel's
misgivings...
 
On 16/01/2022 10:26, Martin Brown wrote:
On 15/01/2022 17:58, jlarkin@highlandsniptechnology.com wrote:
On Sat, 15 Jan 2022 17:50:14 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 21:01, Joe Gwinn wrote:

In the old days, only VAX/VMS had hardware support for 128-bit floats

Probably a very wasteful decision that cost them dear. The requirement
for anything above a 64 bit FP word length is very esoteric.

And thus most VAX processors emulated it in software - only a few had
hardware support. (It is not unlikely that software emulation was
faster than hardware for some tasks - hardware floating point used to be
very slow for anything other than add, subtract and multiply.)

It's rare that 128-bit floats are useful - if you need more than 64-bit,
you probably need more than 128-bit and would be looking at arbitrary
precision floating point libraries.
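
(MPFR is the usual choice there - a minimal sketch with an arbitrarily
chosen 256-bit precision; link with -lmpfr -lgmp:)

#include <stdio.h>
#include <mpfr.h>   /* GNU MPFR: arbitrary precision binary floats */

int main(void)
{
    mpfr_t x;
    mpfr_init2(x, 256);             /* 256-bit mantissa, ~77 digits */

    mpfr_set_ui(x, 2, MPFR_RNDN);
    mpfr_sqrt(x, x, MPFR_RNDN);     /* sqrt(2), correctly rounded */

    mpfr_printf("sqrt(2) = %.70Rf\n", x);
    mpfr_clear(x);
    return 0;
}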

Sometimes it generates loop unrolled code that is completely wrong too :(

It's easy to generate fast code if it doesn't have to be correct!
 
On Sun, 16 Jan 2022 09:26:42 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 15/01/2022 17:58, jlarkin@highlandsniptechnology.com wrote:
On Sat, 15 Jan 2022 17:50:14 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 21:01, Joe Gwinn wrote:

In the old days, only VAX/VMS had hardware support for 128-bit floats

Probably a very wasteful decision that cost them dear. The requirement
for anything above a 64 bit FP word length is very esoteric.

The most popular machine back in that era for high speed floating point
was the Cyber 7600 (60-bit word), which powered Manchester University's
Jodrell Bank processing and the BMEWS early warning system, amongst
other things.

(not IEEE format though). In the cited GCC list, which of these are
directly supported in hardware, versus software emulation?

32, 64 are native x87 and full SSE floating point support
80 x87 only but GCC does it fairly well

Power Basic has a native 80-bit float type and a 64-bit integer.

I'm so impressed. NOT

Of course not. The word BASIC triggers too much emotion, facts not
required.

We had a couple of cases where we wanted to do a signal processing
routine that processed an array of adc samples, on x86. My official
programmer guys did it in gcc and I did it in Power Basic. Mine used
subscripts in the most obvious loop and they used pointers. Mine ran
4x as fast. After a day of mucking with code and compiler switches,
many combinations, they got within about 40%.

Python looks a lot like Basic to me. Some of the goofier features were
added so that it couldn't be directly accused of being Basic syntax,
which would have been toxic.

PB has wonderful string functions. It has TCP OPEN and such, and can
send/receive emails if you really want to. The cool stuff is native,
not libraries; make an EXE file in half a second and you're done.

We wrote MAX, our material control/BOM program, in PowerBasic. It's
great. We couldn't find any commercial packages that actually
understand electronics manufacturing.





--

I yam what I yam - Popeye
 
On Sun, 16 Jan 2022 03:08:06 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

On 1/16/2022 2:26 AM, Martin Brown wrote:
On 15/01/2022 17:58, jlarkin@highlandsniptechnology.com wrote:
On Sat, 15 Jan 2022 17:50:14 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 21:01, Joe Gwinn wrote:

In the old days, only VAX/VMS had hardware support for 128-bit floats

Probably a very wasteful decision that cost them dear. The requirement for
anything above a 64 bit FP word length is very esoteric.

The most popular machine back in that era for high speed floating point
was the Cyber 7600 (60-bit word), which powered Manchester University's
Jodrell Bank processing and the BMEWS early warning system, amongst
other things.

IIRC, the 645 had support for a 71 bit mantissa (?) ca 1970.

OTOH, it's not like there were many units built! ("popular") :>

A shame that so many (all?) of those early machines turned out
to be (mostly) evolutionary dead-ends. Imagine what things
would be like had we started *there* instead of from Intel's
misgivings...

IBM's decision to go Microsoft+Intel was tragic.



--

I yam what I yam - Popeye
 
On 16/01/2022 16:52, jlarkin@highlandsniptechnology.com wrote:
On Sun, 16 Jan 2022 03:08:06 -0700, Don Y

A shame that so many (all?) of those early machines turned out
to be (mostly) evolutionary dead-ends. Imagine what things
would be like had we started *there* instead of from Intel's
misgivings...

IBM's decision to go Microsoft+Intel was tragic.

Agreed.

The current crop of x86 processors are fantastic engineering to get high
throughputs despite the terrible ISA and often massively inefficient
code written for them. They are fine demonstrations that you really
/can/ polish a turd.
 
On 16/01/2022 16:50, jlarkin@highlandsniptechnology.com wrote:
On Sun, 16 Jan 2022 09:26:42 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 15/01/2022 17:58, jlarkin@highlandsniptechnology.com wrote:
On Sat, 15 Jan 2022 17:50:14 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 21:01, Joe Gwinn wrote:

In the old days, only VAX/VMS had hardware support for 128-bit floats

Probably a very wasteful decision that cost them dear. The requirement
for anything above a 64 bit FP word length is very esoteric.

The most popular machine back in that era for high speed floating point
was the Cyber 7600 (60-bit word), which powered Manchester University's
Jodrell Bank processing and the BMEWS early warning system, amongst
other things.

(not IEEE format though). In the cited GCC list, which of these are
directly supported in hardware, versus software emulation?

32, 64 are native x87 and full SSE floating point support
80 x87 only but GCC does it fairly well

Power Basic has a native 80-bit float type and a 64-bit integer.

I'm so impressed. NOT

Of course not. The word BASIC triggers too much emotion, facts not
required.

I suspect he simply means it is not a hard or exciting feature if you
are making a language designed purely to run on a single target
processor family and OS. It is not impressive that Power BASIC has
support for 80-bit floats. It /would/ be impressive if it supported
128-bit floats, because that would require a lot of development effort.

BASIC is okay for small and simple programs. It is not uncommon to need
something quick and easy - you want a language and tool that has
minimum developer time overhead, is interpreted (to minimise the
edit/run cycle time), has string handling, automatic memory management,
garbage collection (or even just keep all memory until the program
ends), and is easy to understand and write even for people who don't do
much coding. I personally don't see BASIC as a bad choice for that -
though I do think Python is usually a better choice these days.

On the other hand, trying to write /big/ systems in BASIC is an exercise
in madness. Pick the right tool for the job.
 
On Sunday, 16 January 2022 at 17:23:19 UTC+1, David Brown wrote:
On 16/01/2022 16:50, jla...@highlandsniptechnology.com wrote:
On Sun, 16 Jan 2022 09:26:42 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 15/01/2022 17:58, jla...@highlandsniptechnology.com wrote:
On Sat, 15 Jan 2022 17:50:14 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 21:01, Joe Gwinn wrote:

In the old days, only VAX/VMS had hardware support for 128-bit floats

Probably a very wasteful decision that cost them dear. The requirement
for anything above a 64 bit FP word length is very esoteric.

The most popular machine back in that era for high speed floating point
was the Cyber 7600 (60-bit word), which powered Manchester University's
Jodrell Bank processing and the BMEWS early warning system, amongst
other things.

(not IEEE format though). In the cited GCC list, which of these are
directly supported in hardware, versus software emulation?

32, 64 are native x87 and full SSE floating point support
80 x87 only but GCC does it fairly well

Power Basic has a native 80-bit float type and a 64-bit integer.

I'm so impressed. NOT

Of course not. The word BASIC triggers too much emotion, facts not
required.

I suspect he simply means it is not a hard or exciting feature if you
are making a language designed purely to run on a single target
processor family and OS. It is not impressive that Power BASIC has
support for 80-bit floats. It /would/ be impressive if it supported
128-bit floats, because that would require a lot of development effort.

BASIC is okay for small and simple programs. It is not uncommon to need
something quick and easy - you want a language and tool that has
minimum developer time overhead, is interpreted (to minimise the
edit/run cycle time),

isn't Power BASIC compiled?
 
On Sunday, 16 January 2022 at 11:45:44 UTC+1, David Brown wrote:
On 16/01/2022 10:26, Martin Brown wrote:
On 15/01/2022 17:58, jla...@highlandsniptechnology.com wrote:
On Sat, 15 Jan 2022 17:50:14 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 21:01, Joe Gwinn wrote:

In the old days, only VAX/VMS had hardware support for 128-bit floats

Probably a very wasteful decision that cost them dear. The requirement
for anything above a 64 bit FP word length is very esoteric.

And thus most VAX processors emulated it in software - only a few had
hardware support. (It is not unlikely that software emulation was
faster than hardware for some tasks - hardware floating point used to be
very slow for anything other than add, subtract and multiply.)

That leaves division, which was and still is "slow" compared to +/-/*.
 
Martin Brown wrote:
On 15/01/2022 17:58, jlarkin@highlandsniptechnology.com wrote:
On Sat, 15 Jan 2022 17:50:14 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 14/01/2022 21:01, Joe Gwinn wrote:

In the old days, only VAX/VMS had hardware support for 128-bit floats

Probably a very wasteful decision that cost them dear. The requirement
for anything above a 64 bit FP word length is very esoteric.

The most popular machine back in that era for high speed floating point
was the Cyber 7600 (60-bit word), which powered Manchester University's
Jodrell Bank processing and the BMEWS early warning system, amongst
other things.

(not IEEE format though).  In the cited GCC list, which of these are
directly supported in hardware, versus software emulation?

32, 64  are native x87 and full SSE floating point support
80     x87 only but GCC does it fairly well

Power Basic has a native 80-bit float type and a 64-bit integer.

I'm so impressed. NOT

MS C used to have it back in the old v6 days but they rationalised
things to only have 64 bit FP support in C/C++ a very long time ago.

Most decent compilers *do* offer 80 bit reals. It is a pity that
Mickeysoft don't because their code optimiser is streets ahead of both
Intel and GCC's at handling out of order execution parallelism.

Last time I compared them directly was 2006ish, using almost all
single-precision C++ code. Back then, Intel was streets ahead of
Microsoft for vectorization and loop unrolling, and gcc was a distant,
distant third.

Intel C and GCC compilers still support 80 bit floating point.

On the code I have been testing recently Intel generates code that
effectively *forces* a pipeline stall more often than not. MSC somehow
manages the opposite. Pipeline stalls cost around 90 cycles which is not
insignificant in a routine that should take 300 cycles.

What sorts of code are you comparing?

Putting two divides close together with the second one dependent on the
result of the other is one way to do it. MSC tries much harder to
utilise the cycles where the divide hardware isn't ready to answer.
(at least it does when you enable every possible speed optimisation)

Sometimes it generates loop unrolled code that is completely wrong too :(

Yikes.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 17/1/22 3:15 am, David Brown wrote:
On 16/01/2022 16:52, jlarkin@highlandsniptechnology.com wrote:
On Sun, 16 Jan 2022 03:08:06 -0700, Don Y

A shame that so many (all?) of those early machines turned out
to be (mostly) evolutionary dead-ends. Imagine what things
would be like had we started *there* instead of from Intel's
misgivings...

IBM's decision to go Microsoft+Intel was tragic.


Agreed.

The current crop of x86 processors are fantastic engineering to get high
throughputs despite the terrible ISA and often massively inefficient
code written for them. They are fine demonstrations that you really
/can/ polish a turd.

Only if you have the kind of funding that relies on a brutal monopoly of
a major industry that has almost every other industry by the
short-and-curlies.

Think how much amazing technology could have been produced by the same
level of investment in an open market. The waste of talent is nothing
short of tragic, on a global scale.

CH
 
