Hi all,
We have a test environment that will become the production server when everything is done, and I am trying to get ready for the production upgrade. I wanted to get a number for how long robocopy would take to copy <wt_home_old-server>\vaults\defaultcachevault to <wt_home_new-server>\vaults\defaultcachevault.
It failed with the following error: "ERROR 64 (0x00000040) Scanning Source Directory \\server\d$\ptc\windchill\windchill_8.0\vaults\defaultcachevault\
The specified network name is no longer available."
We do have a problem in that the vaults were never broken up into more manageably sized folders. Defaultcachevault is currently a single directory with 1,323,??? files totaling 75.5 GB. I think robocopy is failing due to this huge file count; I know Windows doesn't handle directories that size well. (We will deal with this after the rehost and upgrade, not before.)
My test of robocopy to transfer the latest Oracle dump and Aphelion export worked fine.
Does anyone have any suggestions for changes to my robocopy syntax that could help, or other tools that might work? The server OS is Windows Server 2003 R2, Standard x64 Edition, SP2, with gigabit Ethernet. Here is my command syntax:
robocopy "\\server\d$\ptc\windchill\windchill_8.0\vaults\defaultcachevault" "d:\ptc\robo-test\vaults\defaultcachevault" /R:2 /W:10 /MIR /NFL /LOG+:d:\ptc\robo-log.log
The only other strategy I have is to use DFS Replication from the production server to a directory outside my <wt_home> on the target server, letting it run for 10 days or so until DFS reporting says we are complete and up to date on new and changed objects.
When I actually start the live upgrade, I would delete the <wt_home>\vaults\defaultcachevault folder and use move to relocate my new defaultcachevault directory into position. The move should execute quickly, since the directory populated by DFS Replication will be on the same server and drive as the Windchill installation, so the move is just a directory rename.
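Roughly, the cutover would be something like this (the staging path here is just an illustration of wherever DFS Replication lands, and <wt_home> is a placeholder as above):

rem Run with Windchill stopped.
rmdir /s /q "<wt_home>\vaults\defaultcachevault"
move "d:\ptc\dfs-staging\defaultcachevault" "<wt_home>\vaults\defaultcachevault"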
Does anyone have any thoughts on why this DFS Replication/move idea won't work? Any other suggestions for managing the copying of oversized directories between servers?
Thank you,
Tom
Tom,
Your network error is ERROR_NETNAME_DELETED ("The specified network name is no longer available"): http://msdn.microsoft.com/en-us/library/windows/desktop/ms681382(v=vs.85).aspx
This is a relatively generic error with many possible causes, and it will happen with all copy utilities: robocopy, xcopy, etc. There are some things you can try:
- Add the source server's IP address to the hosts file on the destination server, so the UNC path doesn't depend on name resolution.
- Temporarily disable antivirus scanning on both servers; real-time scanning during a million-file copy is a common culprit.
- Try another copy tool such as TeraCopy.
- If the servers are virtual machines, clone the source instead of copying files.
- Restore the vault onto the target server from a tape backup.
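On the robocopy side, one variant worth a try (just a sketch, not a promised fix: /Z restarts partially copied files after a network drop, /NDL and /NP cut the logging overhead of scanning a million-plus files, and the longer retry/wait values give a transient drop time to clear):

robocopy "\\server\d$\ptc\windchill\windchill_8.0\vaults\defaultcachevault" "d:\ptc\robo-test\vaults\defaultcachevault" /MIR /Z /R:10 /W:30 /NFL /NDL /NP /LOG+:d:\ptc\robo-log.log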
DFS Replication is fine if you have a couple of weeks to wait for the synchronization, but the alternatives above have been noticeably faster for me.
Kind Regards,
Matt Meadows
Solutions Architect
VIRSO Inc
O: 618 937 8115
C: 314 749 8377
E: mmeadows@virsoinc.com
Thank you all for the replies and ideas.
Hi Matt, I had not added the IP to the hosts file before, but it is in now. Before adding the entry, I ran a ping by name against the source server from my destination server to get the IP. It resolved the name and all replies were under 1 ms. I think name resolution was working fine; the similarly structured robocopy of the latest Oracle dump from the source server completed without failure. The two commands were in a batch file run back to back.
Even though a reply in another thread on another site stated that robocopy doesn't do any indexing up front, I still think there is some correlation with the file count. I don't know.
I've not received a reply from PTC on any issues they can think of regarding the DFS Replication method. The support engineer was going to make inquiries to see if others could think of any problems with my DFS approach. He did suggest using FTP to transfer the vault between the two servers. I know this will take a while up front, but maybe the trick would be to combine FTP to move the bulk of the vault data with DFS Replication to keep it up to date and synchronized until go-live. I would just like a method that preloads the target with the bulk of the data required; then when the production run does come, I only have to create a new Oracle dump and Aphelion export and copy those two things.
I like the idea of FTP; I'd not even thought about that - talk about being stuck in a rut! He also suggested other tools such as TeraCopy. The cloning won't work; these aren't VMs or cloud servers.
As far as I can tell the antivirus scanning is turned off, but I was unable to get IT to actually verify that, and I'm not all-powerful when it comes to some things! I'd also not thought about the restore from tape, but the trouble of getting that assistance makes the FTP option look better.
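If I go the FTP route, I'm picturing something like this rough script (server name, credentials, and remote path are all placeholders; -n suppresses auto-login so the user command in the script works, -i turns off the per-file mput prompt, and binary mode matters for the vault content). Run from the source server:

ftp -i -n -s:vault.ftp new-server

with vault.ftp containing:

user windchill <password>
binary
lcd d:\ptc\windchill\windchill_8.0\vaults\defaultcachevault
cd /staging/defaultcachevault
mput *
quit

That only works here because defaultcachevault is a single flat directory; the stock Windows ftp client doesn't recurse into subfolders.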
One way or another I'll get the data over there - it's just a question of how easily and how much time!
Have a great weekend,
Tom
Tom,
Did you try using robocopy's file selection options /MAXAGE:n and /MINAGE:n?
These options can limit the number of files scanned and copied in each pass. If you drive it from a batch file, you can loop over age bands until all files are copied, as in the sketch below.
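Something like this (the band edges in days are made-up values; tune them so each pass handles a manageable slice of the 1.3 million files):

@echo off
setlocal
set SRC="\\server\d$\ptc\windchill\windchill_8.0\vaults\defaultcachevault"
set DST="d:\ptc\robo-test\vaults\defaultcachevault"

rem Oldest files first: /MINAGE:720 excludes anything newer than 720 days.
robocopy %SRC% %DST% /E /MINAGE:720 /R:2 /W:10 /NFL /NDL /NP /LOG+:d:\ptc\robo-log.log

rem Middle bands: /MINAGE:a /MAXAGE:b copies files between a and b days old.
for %%P in ("360 720" "180 360" "90 180" "30 90") do for /f "tokens=1,2" %%a in (%%P) do (
    robocopy %SRC% %DST% /E /MINAGE:%%a /MAXAGE:%%b /R:2 /W:10 /NFL /NDL /NP /LOG+:d:\ptc\robo-log.log
)

rem Final pass with no age filter picks up the newest files and any stragglers;
rem files already copied are skipped as unchanged.
robocopy %SRC% %DST% /E /R:2 /W:10 /NFL /NDL /NP /LOG+:d:\ptc\robo-log.log
endlocal

I used /E rather than /MIR so a filtered pass never purges files outside its own age band; one last /MIR run could follow once everything is seeded.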
David