Keep It Literally In The Box with Thin CAD Clients

Requirement

Based on the current conditions of:

  • poor stability in both our LAN and WAN
  • IT cost and expertise required to both support and scale:
    • synchronized backups (multi-master)
    • freezing/crashing of CAD clients
  • PTC issues with support and performance of:
    • clusters
    • replication (single or multi-master)

Solution

A single-master vault with thin/zero clients and graphics accelerators is the best option.

This monolithic physical solution requires:

  • 2X 16-blade chassis with:
    • 3 physical application servers: 1 for production Windchill, 1 for a production CAD workstation host, and 1 VM host for test and development; each with:
      • 2X Intel 6/8-core 3.0 or 2.7 GHz CPUs (possibly 12 foreground + 3 background method servers)
      • 2X 96 GB or 128 GB RAM
      • 2X 10 GigE
      • 2X 8 Gb fiber for SAN
      • Linux 6.# OS (ability to have AuFS)
    • 1 physical database server with:
      • 2X Intel quad-core 3.0 GHz CPUs
      • 2X 48 GB RAM
      • 2X 10 GigE
      • 2X 8 Gb fiber for SAN
      • Oracle Database Server (my preference is not to have SQL Server, based on poor performance and rebooting)
    • 28 workstation blades (supporting 224 concurrent users at 8 users per blade; see the sizing sketch below), each with:
      • 2X Intel 8-core 2.7 GHz CPUs
      • 2X 64 GB RAM
      • 1X 10 GigE
      • 1X 8 Gb fiber for SAN
      • 2 graphics cards
  • Thin clients for new systems, or USB clients for existing remote desktops.

[Image: C7000.PNG - blade chassis]
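
As a quick sanity check on the capacity figures quoted above, here is a minimal sizing calculation; it uses only the numbers from the list (28 blades, 8 users per blade, 2X 64 GB RAM and 2X 8-core CPUs per blade) and makes no other assumptions.

```python
# Sanity check of the quoted workstation-blade capacity figures.
WORKSTATION_BLADES = 28
USERS_PER_BLADE = 8
RAM_PER_BLADE_GB = 2 * 64   # two 64 GB banks per blade
CORES_PER_BLADE = 2 * 8     # two 8-core CPUs per blade

concurrent_users = WORKSTATION_BLADES * USERS_PER_BLADE   # 224
ram_per_user_gb = RAM_PER_BLADE_GB / USERS_PER_BLADE      # 16 GB
cores_per_user = CORES_PER_BLADE / USERS_PER_BLADE        # 2 cores

print(f"Concurrent CAD users: {concurrent_users}")
print(f"RAM per concurrent user: {ram_per_user_gb:.0f} GB")
print(f"CPU cores per concurrent user: {cores_per_user:.0f}")
```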

Pros and Cons

There are both positive aspects (Pros) and negative aspects (Cons/Risks) of the thin-client, single-system, no-replication approach:

Pros:

    • Thin clients have no issues with VPN or MPLS at 100 ms latency. Only PCoIP connections run from clients to their blade workstations; no content or metadata is transferred to the user.
    • Best Windchill application performance:
      • No replication/clustering overhead from queues, file transfers, and cache synchronization
      • No poorly performing synchronization software such as rsync for clusters
    • Best Pro/E session performance with the Windchill application:
      • No Pro/E session disconnections from the server, thanks to redundant 10 GigE switches connected directly to the Windchill application server. All Pro/E workstations and the Windchill system are completely isolated from both LAN and WAN issues.
      • Blade workstations are directly connected to the server at 10 GigE, bypassing the LAN, WAN, and server-room core (if on the same chassis). Everything is contained within the chassis and the EVA.
      • If there is a complete network failure in the LAN, all user sessions remain intact and continue running.
      • Far fewer issues in the WAN with packet loss and latency
      • New accelerated graphics capabilities with thin clients using PCoIP (pixel) caching
    • Least complexity and support required:
      • No local Pro/E installs
      • Easy to train support staff
      • No replicated servers:
        • Single point of backup for Windchill with 1 storage appliance
        • Only 6 user profiles to administer and 28 workstation blades serving possibly 224 concurrent CAD/CAM users
        • Almost no dependence on domain controllers
        • No limit to physical storage capacity with physical servers; unlimited mount points with Linux
        • Least amount of data storage required (see the storage sketch after this list):
          • No redundant local workspaces on each physical machine
          • Single copy of workspaces on the SAN, always protected
          • No file servers or duplicate slaves
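
To make the storage point concrete, here is a rough back-of-the-envelope comparison. The average workspace size is a hypothetical figure, not one from the proposal; the user count comes from the blade sizing above, and the 2X factor simply reflects the claim that fat clients duplicate each workspace on an unprotected desktop disk.

```python
# Hypothetical illustration of the storage saving: fat clients keep a local
# workspace cache per desktop in addition to the server-side workspace, while
# the thin-client design keeps one protected copy per user on the SAN.
AVG_WORKSPACE_GB = 20      # assumed average Pro/E workspace size (not from the proposal)
CONCURRENT_USERS = 224     # from the blade sizing above

san_workspaces_gb = AVG_WORKSPACE_GB * CONCURRENT_USERS   # one protected copy per user
fat_client_total_gb = san_workspaces_gb * 2               # plus a duplicate local cache each

print(f"Thin clients: ~{san_workspaces_gb} GB of workspaces, all on the backed-up SAN")
print(f"Fat clients:  ~{fat_client_total_gb} GB, half of it on unprotected desktops")
```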

Cons/Risks

    • Need to test at 300 to 400 ms latency
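
A minimal way to gather first-order latency numbers from a remote site is sketched below. It measures TCP connect round-trip time with Python's standard library, which only approximates what PCoIP traffic will actually see, and the gateway hostname is a placeholder, not a real system.

```python
import socket
import statistics
import time

def tcp_rtt_samples(host, port=443, samples=20, timeout=2.0):
    """Collect TCP connect round-trip times (ms) to a host as a rough latency proxy."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            pass  # drop failed attempts; persistent failures point to routing/firewall issues
        time.sleep(0.1)
    return rtts

if __name__ == "__main__":
    samples = tcp_rtt_samples("pcoip-gateway.example.com")  # placeholder hostname
    if samples:
        print(f"min {min(samples):.1f} ms  "
              f"median {statistics.median(samples):.1f} ms  "
              f"max {max(samples):.1f} ms")
        if statistics.median(samples) > 300:
            print("Median RTT is in the 300-400 ms band that still needs testing.")
    else:
        print("No successful connections; check firewall/routing first.")
```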

Common Issues with clusters and file servers

    • Still dependent on the performance of AD and DNS
    • If there is a complete network failure in the server room, all users are disconnected.
    • Single points of failure still exist in clusters and file servers for background method server and database failures. I would rather have a complete Windchill shutdown than lose a remote master file server that I cannot recover.