A fun note about ARM

#1
So, I was bragging about the power of ARM in the Discord today, and I figured I'd share something with you. Here's a simple snippet of x86 code:

[hidden code block, visible only to registered members]
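The snippet itself is hidden behind the registration wall. Going by the figures stated later (9 bytes, 6 cycles per pass, a label called .top) and the replies describing the ARM version as a flag-setting multiply-accumulate, here is one plausible reconstruction; it is a guess, not the author's original code:

```asm
; hypothetical x86 multiply-accumulate loop (NASM syntax): eax += ebx * ecx
.top:
    mov  edx, ebx     ; copy multiplicand             (2 bytes)
    imul edx, ecx     ; edx = ebx * ecx               (3 bytes)
    add  eax, edx     ; accumulate; sets ZF           (2 bytes)
    jnz  .top         ; loop while result is nonzero  (2 bytes)
                      ; total size: 9 bytes
```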


and the equivalent for ARM:

[hidden code block, visible only to registered members]
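This snippet is hidden as well. Given the description in the replies (a single MLA with the S-bit set, plus a branch, totalling 8 bytes), a plausible reconstruction, again a guess rather than the original, is:

```asm
; hypothetical ARM equivalent: r0 += r1 * r2, with the S-bit updating flags
.top:
    MLAS r0, r1, r2, r0   ; multiply-accumulate, sets flags   (4 bytes)
    BNE  .top             ; loop while result is nonzero      (4 bytes)
                          ; total size: 8 bytes
```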


Now, these two are completely identical when it comes to what they do, but they are worlds apart when it comes to resources consumed. We'll start with the x86 one.
Clock cycles:

[hidden code block, visible only to registered members]
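The per-instruction counts are hidden too. A hypothetical breakdown consistent with the stated 6-cycle total, assuming an older in-order x86 core and a 4-instruction multiply-accumulate loop:

```asm
mov  edx, ebx     ; 1 cycle
imul edx, ecx     ; 3 cycles (the multiply is the expensive step)
add  eax, edx     ; 1 cycle
jnz  .top         ; 1 cycle (taken)
                  ; total: 6 cycles per pass
```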

The total for a single pass through this program (that is, starting from .top until we get back to .top) is 6 clock cycles, and it takes up 9 bytes in memory.

Now, with ARM this is simple: every instruction takes 1 cycle and occupies 4 bytes. So the total is 2 clock cycles and 8 bytes in memory.

This means that the ARM program runs in 4 fewer CPU cycles while using 1 byte less memory. This being a small example, the power of ARM isn't really reflected in the memory footprint (I'm actually impressed it's smaller at all for this program), but it's a massive difference in run time.

Scenario: an x86 CPU running at 1 Hz and an ARM CPU running at 1 Hz
Time needed for the x86 CPU to complete one loop: 6 seconds
Time needed for the ARM CPU to complete one loop: 2 seconds
Now that's a massive difference, but sure, the frequencies are ultra low. Let's use a different example:

x86 CPU is running at 3GHz
ARM CPU is running at 1GHz
Run the loop 10,000 times. Which CPU will finish first?



The answer: they will TIE. (x86: 10,000 passes × 6 cycles ÷ 3 GHz = 20 µs; ARM: 10,000 passes × 2 cycles ÷ 1 GHz = 20 µs.)
This shows the massive difference. The amount of work you can do with a single-core 3GHz x86 CPU is THE EXACT SAME amount of work you can do with a single-core 1GHz ARM CPU (and in this example, ARM only takes a bigger lead as the program gets more complex). Now think about it. Your phone probably has a 4+2 ARM CPU, meaning it has 6 cores total: 4 probably running at 1.4GHz (probably faster, even), and 2 running at half of that (700MHz). In comparison, you might have a desktop PC with a 3.5GHz quad-core x86 processor. If you were to run all of your software, OS, games, whatever on the CPU in your phone (assuming it was written for ARM, and not just ported), then it would be slightly (7 to 15%) FASTER than your desktop PC.

This means that your phone has MORE computing power than your "high end" PC. It's an interesting thing to think about.

#2
x86 is a pretty terribly designed architecture...
RISC has always been considered better than CISC; even if x86 uses RISC-like microcode internally, it still doesn't reach ARM's ability.

#3
Quote:(03-15-2018, 10:05 PM)Ender Wrote:

x86 is a pretty terribly designed architecture...
RISC has always been considered better than CISC; even if x86 uses RISC-like microcode internally, it still doesn't reach ARM's ability.

Exactly. I was really trying to lead people to the question "then why don't we have RISC desktops?" It's not really a huge mystery: we did have RISC desktops in the 90s (the PowerPC Macs), but IBM wouldn't raise the clock rate when Apple wanted to, Intel pushed to win the *ISC wars, and when Apple finally left the AIM alliance, Intel did win.

Also, a useful note: even though the instruction I used looks complicated or obscure, it's not. It's a simple MLA (multiply accumulate) instruction with some options added. The S on the end is an extremely useful ARM feature called the S-bit. You can add an S to the end of almost every instruction that does work, and it acts as the equivalent of a TEST executed alongside the instruction. It's amazingly useful.
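To illustrate what the S-bit buys you (hypothetical registers and label, not the original code):

```asm
; without the S-bit: an explicit compare is needed before branching
MLA  r0, r1, r2, r0     ; r0 = r1*r2 + r0
CMP  r0, #0             ; set the flags manually
BNE  loop

; with the S-bit: the MLA itself sets the flags
MLAS r0, r1, r2, r0     ; r0 = r1*r2 + r0, flags updated
BNE  loop
```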

#4
Quote:(03-15-2018, 10:09 PM)phyrrus9 Wrote:

Also a useful note: even though that instruction I used looks complicated or obscure, it's not. It's a simple MLA (multiply accumulate) instruction, with some options added. The S on the end is an extremely useful feature of ARM, called the S-bit. You can add an S to the end of almost every instruction that does work, and what it does is the equivalent of a TEST when executing the instruction. It's amazingly useful.

It would be nicer to have something like MLA_S, MLA:S, or S:MLA; it gets confusing without a separator in between.

#5
Quote:(03-15-2018, 10:13 PM)Ender Wrote:

Would be nicer to have something like MLA_S, MLA:S, S:MLA or something. Gets confusing without something in between

Some assemblers put a dot between the opcode and the condition. Example:

[hidden code block, visible only to registered members]
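The hidden example isn't recoverable, but a dot-separated spelling might look like this (illustrative only; the register choices are made up):

```asm
mla.ne  r0, r1, r2, r3   ; MLA executed only if the NE condition holds
```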

but the official one doesn't. ARM opcodes are 3 characters, an S in the 4th position always means the S-bit, and condition codes are always 2 letters, meaning the max length is 5:

[hidden code block, visible only to registered members]

easy to break that down into

[hidden code block, visible only to registered members]
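The hidden example and its breakdown can't be recovered, but a hypothetical instruction following the naming rules just described would be:

```asm
MLAEQ r0, r1, r2, r3   ; 5 characters total
; breaks down into:
;   MLA - the 3-character opcode (multiply accumulate)
;   EQ  - the 2-letter condition code (execute only if equal)
```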
