Hardware (Read Only)

workstations vs. others

Message 1 of 12
Anonymous
313 Views, 11 Replies

workstations vs. others

Can someone tell me in a nutshell the difference between a normal pc and a workstation? I've been told it's simply the grade of components within the pc and others have said there is a definite difference in the chipset, maybe the architecture of the unit. I would appreciate any info here.

Thank you

John
Message 2 of 12
Anonymous
in reply to: Anonymous

There is no official definition of a workstation, but there are multiple
factors to weigh when choosing components for one. These are some of
them; others may add to or modify the list.

1) Precision graphic rendering vs. speed. A workstation will, at least in
theory, produce high-accuracy renderings by not taking the shortcuts that
are considered acceptable for extra speed in games. The rendering had
better be right; walking through walls is unacceptable.

2) High-reliability components, since a computer crash costs lost
man-hours as well as inconvenience, especially if work has not yet been
backed up.

3) Multi-CPU capable, since many workstation programs can use multiple
CPUs. In practice this means Xeons or Opterons.

4) Error-correcting (ECC) memory, and lots of it. If a memory error goes
undetected in a game, nobody really dies, but if an airplane is built
based on work from a machine that has a tendency to drop digits, the
plane just might crash. The cost of that memory in a work environment is
comparatively minor if it speeds up the engineer by even 1%. One percent
of a $100,000-a-year employee's salary is $1,000, and that is more than
the price differential for the higher-grade memory. If the employee is
paid less, the investment takes longer to recover, but it still pays to
spend the money. (A rough payback sketch follows this list.)

5) Redundant RAID hard drives. (Heaven forbid the drive crashes and
you lose data.)
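
To put rough numbers on the payback argument in item 4, here is a minimal back-of-the-envelope sketch; the salary and 1% figures come from the post above, while the ECC price premium and working days per year are illustrative assumptions.

```python
# Back-of-the-envelope payback estimate for the ECC RAM argument in item 4.
# Salary and 1% gain are the figures quoted above; the $400 premium and
# 250 working days per year are illustrative assumptions.

def ecc_payback_days(annual_salary, productivity_gain, price_premium,
                     working_days_per_year=250):
    """Working days of the productivity gain needed to recover the premium."""
    gain_per_day = annual_salary * productivity_gain / working_days_per_year
    return price_premium / gain_per_day

if __name__ == "__main__":
    days = ecc_payback_days(100_000, 0.01, 400)   # assumed $400 ECC premium
    print(f"Premium recovered in roughly {days:.0f} working days")
```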

Message 3 of 12
Anonymous
in reply to: Anonymous

On Wed, 25 Apr 2007 18:22:50 +0000, Jerry G wrote:

>There is no official definition of a workstation but there are multiple
>factors when choosing components for a workstation. These are some of
>them, but others may modify the list.

What Jerry says is essentially correct. However, with today's high-powered dual-core
CPUs there is a definite blurring of the division between run-of-the-mill
desktops and workstations. This is mostly because of the gaming market, which
has pushed desktops to workstation-level performance.

The main difference you will see is whether you go with a single dual-core chip
(Core 2 Duo, Athlon X2) or two physical CPUs, each of which is dual-core, giving
you a potential of four CPUs. Only a small handful of apps are built to take solid
advantage of more than two CPUs - 3ds Max and VIZ are two that will.
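
A toy sketch of that point: an application only benefits from extra CPUs if its work is explicitly split into independent chunks. This is a generic Python illustration, not how 3ds Max or VIZ actually schedule their rendering.

```python
# Generic illustration of splitting CPU-bound work across cores.
# The "tile" task is a stand-in; real renderers divide work very differently.
from multiprocessing import Pool
import math

def render_tile(tile_index):
    # Stand-in for a CPU-heavy job such as rendering one image tile.
    return sum(math.sqrt(i) for i in range(500_000)) + tile_index

if __name__ == "__main__":
    tiles = range(16)

    # Serial: one core does everything while the others sit idle.
    serial = [render_tile(t) for t in tiles]

    # Parallel: only because the work was split into tiles can a pool of
    # worker processes spread it across all available CPUs.
    with Pool() as pool:
        parallel = pool.map(render_tile, tiles)

    assert serial == parallel
```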

As Jerry mentioned, ECC RAM is another criterion reserved for workstations.
Along with it goes an advanced chipset that actually supports ECC RAM -
mainstream off-the-shelf motherboards usually do not. Any machine that supports
dual Xeons or Opterons will support ECC RAM. This class of motherboard - which
might cost $500 by itself - also has highly engineered components and won't
blow a capacitor the way an off-the-shelf board might.

Workstations also support much more RAM than a normal motherboard, which
usually tops out at 2 or 4 slots; a workstation motherboard may have 6 or 8
memory slots.

True workstations always use high-end OpenGL graphics cards that have certified,
application-specific drivers for graphics programs like Max/VIZ, Maya,
Lightwave, and so on. The difference just from using the certified driver on a
decently polygon-heavy Max model is phenomenal.

RAID arrays - maybe. In the past, SCSI drives reigned supreme for workstations
because they had higher throughput (great for large files), but that's been
eclipsed by SATA drives and by the fact that most people work off a server
instead of local data.

Matt
mstachoni@comcast.net
mstachoni@bhhtait.com


Message 4 of 12
Anonymous
in reply to: Anonymous

I appreciate your mostly agreeing with me. Because there is no hard-and-fast
definition, this is a topic that can certainly generate a fair number of
arguments.

My comment about RAID did not say SCSI arrays; many of the newer machines now
support SATA RAID. My point was about the use of redundant arrays. In a
workstation environment redundancy is important because lost data can be
extremely expensive. Imagine that the job you've been working on for three
weeks is lost to a drive crash. Even assuming you can recreate the job from
scratch in only 50% of the time it took the first time, I have found from
personal experience that recreating a job tends to introduce errors and
omissions: you remember designing some aspect of the job, but the memory is of
the first pass, and the replacement is missing that aspect. And when you have
to redo the work, you are usually rushing to make up for the lost time. In any
case, lost time in a work environment is always expensive; if you are redoing
lost work, you are losing the money you could be earning by doing new work.
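
To make that concrete, here is a rough comparison of the price of a mirrored drive against the cost of redoing three weeks of work; every figure is an illustrative assumption, not a quote.

```python
# Illustrative comparison: a second drive for RAID 1 vs. recreating lost work.
hourly_rate = 50            # assumed loaded rate for a designer, $/hour
weeks_lost = 3
redo_fraction = 0.5         # optimistic: the redo takes half the original time

rework_hours = weeks_lost * 40 * redo_fraction
rework_cost = rework_hours * hourly_rate
mirror_drive_cost = 100     # assumed price of a second SATA drive

print(f"Rework: ~{rework_hours:.0f} h, ~${rework_cost:,.0f}")
print(f"Mirror drive: ${mirror_drive_cost} "
      f"(~{rework_cost / mirror_drive_cost:.0f}x cheaper than redoing the work)")
```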


Message 5 of 12
Anonymous
in reply to: Anonymous

On Thu, 26 Apr 2007 12:08:07 +0000, Jerry G wrote:

>I appreciate your agreeing mostly with me. This is a topic that,
>because of the lack of a hard and fast definition, can certainly bring
>up a fair number of arguments.

Yeah, but they're usually stupid arguments :). Like anyone gets all excited
about this anymore.

> My comment about RAID did not say SCSI arrays. Many of the newer
>machines now support SATA RAID. My statement was about the use of
>redundant arrays. In the workstation environment redundancy is important
>due to the fact that lost data can be extremely expensive.

True, but it's just as expensive for any other kind of work using normal PCs -
doctors, lawyers, astronauts, and so on.

That's why we have servers and strong backup policies in place.

>Imagine that the job that you've been working on for 3 weeks is lost due to a drive
>crash.

For that to happen, none of the files would be on the server, where they
belong. If that actually happened, the person would be fired on the spot, and for
good reason. That's an employee problem that no amount of redundancy will fix,
because if the person is truly dumb enough to do that I wouldn't trust their
design and CAD work either.

My point was that any kind of redundant array at the local PC level in a
corporate environment is pretty much worthless, because the only data you are
accessing is server-based and server-stored, redundant via RAID, and backed up
religiously.

I consider local machines to be disposable from a data standpoint. If a
machine's HD fails, the only thing the IT dude should have to do is replace it
with a new HD and reinstall the OS and apps. The person should be able to log on
to any spare workstation and everything works fine; when the machine is fixed,
the person logs back on and everything works as it did before.

This requires a bit of planning and forethought, and pretty much requires
Windows Roaming Profiles to be truly effective. If the IT guy is on the ball,
the systems will be configured as closely alike as possible, with disk imaging
or some other kind of automated system-rebuild mechanism.

I lose hard drives as much as anyone else does, and have yet to lose a byte of
useful corporate data because of one.

Matt
mstachoni@comcast.net
mstachoni@bhhtait.com
Message 6 of 12
Anonymous
in reply to: Anonymous

Thanks guys. Only one other question concerning power supplies. Any thoughts there or does 'bigger is better' rule in this case?

Thanks

John
Message 7 of 12
Anonymous
in reply to: Anonymous

C'mon Matt, tell us how you REALLY feel! 🙂

Rick
"Matt Stachoni" wrote in message
news:5565911@discussion.autodesk.com...
On Thu, 26 Apr 2007 12:08:07 +0000, Jerry G wrote:

>I appreciate your agreeing mostly with me. This is a topic that,
>because of the lack of a hard and fast definition, can certainly bring
>up a fair number of arguments.

Yeah, but they're usually stupid arguments :). Like anyone gets all excited
about this anymore.

> My comment about RAID did not say SCSI arrays. Many of the newer
>machines now support SATA RAID. My statement was about the use of
>redundant arrays. In the workstation environment redundancy is important
>due to the fact that lost data can be extremely expensive.

True, but it's just as expensive for any other kind of work using normal
PCs -
doctors, lawyers, astronauts, and so on.

That's why we have servers and strong backup policies in place.

>Imagine that the job that you've been working on for 3 weeks is lost due to
>a drive
>crash.

For that to happen means that none of the files are on the server, where
they
belong. If that actually happened, the person would be fired on the spot and
for
good reason. That's an employee problem that no amount of redundancy will
fix,
because of the person is truly dumb enough to do that I wouldn't trust their
design and CAD work either.

My point was that any kind of redundant array at the local PC level in a
corporate environment is pretty much wothless, because the only data you are
accessing is server-based and server-stored, redundant via RAID and backed
up
religiously.

I consider local machines to be disposable from a data standpoint. If a
machine's HD fails, the only thing the IT dood should have to do is replace
it
with a new HD and reinstall the OS and apps. The person should be able to
log on
to any spare workstation and everything works fine. When the machine is
fixed
the person logs on and everything works as it did before.

This requires a bit of planning and forethought, and pretty much requires
Windows Roaming Profiles to be truly effective. If the IT guy is on the
ball,
they would have the systems be as closely configured as possible, and use
disk
imaging or some other kind of automated system rebuild mechanism.

I lose hard drives as much as anyone else does, and have yet to lose a byte
of
useful corporate data because of one.

Matt
mstachoni@comcast.net
mstachoni@bhhtait.com
Message 8 of 12
Anonymous
in reply to: Anonymous

On Thu, 26 Apr 2007 19:11:00 +0000, John0070 <> wrote:

>Thanks guys. Only one other question concerning power supplies. Any thoughts there or does 'bigger is better' rule in this case?

Bigger doesn't mean better. The internals are what matter, and they vary all
over the place.

If you are buying a pre-built system, the PSU is pretty much guaranteed to be fine.

If you are building a system from scratch, check out PC Magazine's latest issue
which has a PSU blowout review.
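
If you do size one yourself, a rough power budget with headroom is a better guide than raw wattage. The component draws below are ballpark assumptions, not measured figures.

```python
# Rough PSU sizing: sum ballpark component draws and leave ~30% headroom.
# All wattages are illustrative assumptions.
component_watts = {
    "dual-core CPU": 90,
    "workstation graphics card": 120,
    "motherboard + RAM": 60,
    "hard drives (x2)": 20,
    "optical drive, fans, misc": 40,
}

load = sum(component_watts.values())
headroom = 1.3   # margin so the PSU runs in its efficient range
print(f"Estimated load: {load} W, suggested PSU: {load * headroom:.0f} W or more")
```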

Matt
mstachoni@comcast.net
mstachoni@bhhtait.com
Message 9 of 12
Anonymous
in reply to: Anonymous

On Thu, 26 Apr 2007 19:29:33 +0000, RickGraham <> wrote:

>C'mon Matt, tell us how your REALLY feel! 🙂

Hey, I used to be a big proponent of RAID arrays, both SCSI and ATA. But I found
that for a corporate environment it simply wasn't required. For home use or the
small single-PC user, I still don't think it's a big deal given how easy it is to
back stuff up to removable or external media. I've gone from a large RAID array at
home to multiple hard disks that partition data away from the OS and apps.

Matt
mstachoni@comcast.net
mstachoni@bhhtait.com
Message 10 of 12
Anonymous
in reply to: Anonymous

Thanks Matt.
Message 11 of 12
Anonymous
in reply to: Anonymous

The comments you make are certainly valid in a larger corporate
environment where there is an IT staff. But in a smaller workplace
where there are 3 or 4 designers, IT is left to one of those workers
when he has nothing else to do, the networking is peer-to-peer instead
of domain-based, and there is no central server (like my workplace),
a redundant array becomes more important. On the other hand,
we don't use true workstations, but manage with big-name mass-market
boxes because they are so much cheaper.
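
For a peer-to-peer shop like that, even a trivial scheduled copy of the working folder to another machine covers the worst case. Here is a minimal sketch; the paths and machine name are placeholders, not anyone's actual setup.

```python
# Minimal dated-copy backup for a small peer-to-peer shop with no server.
# SOURCE and DEST are placeholder paths; point them at the real folders/shares.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path(r"C:\Projects")            # local working folder (placeholder)
DEST = Path(r"\\DESIGN-PC2\Backups")     # share on another machine (placeholder)

def backup_working_folder():
    target = DEST / f"Projects-{date.today().isoformat()}"
    shutil.copytree(SOURCE, target)      # simple full copy, one folder per day
    print(f"Copied {SOURCE} -> {target}")

if __name__ == "__main__":
    backup_working_folder()
```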

Message 12 of 12
Anonymous
in reply to: Anonymous

I have a very similar situation, since we only have three employees, but that
is not an excuse for skipping backups. The more critical issue for us is
having the only computer with LDT on it crash while we're on a deadline. That is
unacceptable, so reliability of the computers is paramount.

Speaking of which, I'm going to the store to buy a new UPS because we lost a
backup when the router went down during a power outage a couple of days ago!
I had forgotten that saving to the network is not possible during a power outage
if the router can't run. For the last few days, there has been an extension
cord across the floor from a workstation's UPS to the router.

Brad
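
For sizing that UPS, a rough runtime estimate from the battery's usable energy and the router's draw is usually enough; the numbers below are assumptions for illustration.

```python
# Very rough UPS runtime estimate for a small network closet.
# Capacity, load, and efficiency are illustrative assumptions.
ups_capacity_wh = 300    # usable watt-hours of a small consumer UPS (assumed)
router_load_w = 15       # typical small router/switch draw (assumed)
efficiency = 0.85        # inverter and conversion losses

runtime_hours = ups_capacity_wh * efficiency / router_load_w
print(f"A {router_load_w} W router could run for roughly {runtime_hours:.1f} hours")
```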

"Jerry G" wrote in message
news:5566661@discussion.autodesk.com...
The comments you make are certainly valid in a larger corporate
environment where there is an IT staff. But in the smaller workplace
where there are 3 or 4 designers at work and IT is left to 1 of those
workers when he has nothing else to do, where the networking is peer to
peer instead of domain based, and there is no master server (like my
workplace,) a redundant array becomes more important. On the other hand,
we don't use true workstations, but manage with big name mass market
boxes because they are so much cheaper.

Matt Stachoni wrote:
> On Thu, 26 Apr 2007 12:08:07 +0000, Jerry G wrote:
>
>> I appreciate your agreeing mostly with me. This is a topic that,
>> because of the lack of a hard and fast definition, can certainly bring
>> up a fair number of arguments.
>
> Yeah, but they're usually stupid arguments :). Like anyone gets all
> excited
> about this anymore.
>
>> My comment about RAID did not say SCSI arrays. Many of the newer
>> machines now support SATA RAID. My statement was about the use of
>> redundant arrays. In the workstation environment redundancy is important
>> due to the fact that lost data can be extremely expensive.
>
> True, but it's just as expensive for any other kind of work using normal
> PCs -
> doctors, lawyers, astronauts, and so on.
>
> That's why we have servers and strong backup policies in place.
>
>> Imagine that the job that you've been working on for 3 weeks is lost due
>> to a drive
>> crash.
>
> For that to happen means that none of the files are on the server, where
> they
> belong. If that actually happened, the person would be fired on the spot
> and for
> good reason. That's an employee problem that no amount of redundancy will
> fix,
> because of the person is truly dumb enough to do that I wouldn't trust
> their
> design and CAD work either.
>
> My point was that any kind of redundant array at the local PC level in a
> corporate environment is pretty much wothless, because the only data you
> are
> accessing is server-based and server-stored, redundant via RAID and backed
> up
> religiously.
>
> I consider local machines to be disposable from a data standpoint. If a
> machine's HD fails, the only thing the IT dood should have to do is
> replace it
> with a new HD and reinstall the OS and apps. The person should be able to
> log on
> to any spare workstation and everything works fine. When the machine is
> fixed
> the person logs on and everything works as it did before.
>
> This requires a bit of planning and forethought, and pretty much requires
> Windows Roaming Profiles to be truly effective. If the IT guy is on the
> ball,
> they would have the systems be as closely configured as possible, and use
> disk
> imaging or some other kind of automated system rebuild mechanism.
>
> I lose hard drives as much as anyone else does, and have yet to lose a
> byte of
> useful corporate data because of one.
>
> Matt
> mstachoni@comcast.net
> mstachoni@bhhtait.com
