
Shuttle KD20 some files missing once loaded after copy

Wooleyx64

New Around Here
Hello All,
I purchased a Shuttle KD20 as my first NAS a couple of years ago. With it I also purchased a 4TB WD Red. A few months back I realized I would eventually need to expand the space to 8TB total, so I finally got around to getting another 4TB WD Red. Unfortunately, with this specific NAS, in order to add a disk in JBOD mode you need to format and start from scratch.

What I did: I had about 2.6TB of data to back up. I connected a 3TB HDD to a PC running Ubuntu, then used rsync to copy all the files from each parent folder on my NAS.

The NAS is set up so that you create main directories (i.e. TV, Music, Games, etc.)

Okay, after I backed up all 2.6TB of data from the NAS to the single HDD, I installed the 2nd 4TB drive into the NAS and created the JBOD, which wiped all the data on the NAS.

It appears to have remembered the folders I had created previously, and it recreated the TV, Music, Games, etc. folders on the new JBOD (now totaling 8TB available).

Getting the data back on the NAS quickly:

I took the 3TB HDD which currently holds the data. I then took both 4TB HDDs out of the NAS and connected them all to the same Ubuntu PC. I used mdadm to assemble the JBOD array on the Ubuntu PC and was able to see the volume as one complete 8TB disk.
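For reference, assembling an existing md-based JBOD on an Ubuntu box goes roughly like this. This is only a sketch: the device name and mount point are examples, and the exact on-disk layout the KD20 firmware creates isn't documented here.

```shell
# Scan attached disks for existing md arrays and assemble any it finds
sudo mdadm --assemble --scan
# Verify which array appeared and which member disks it uses
cat /proc/mdstat
# Mount read-only first so nothing on the NAS volume is modified accidentally
sudo mkdir -p /mnt/jbod
sudo mount -o ro /dev/md0 /mnt/jbod
```

Mounting read-only first is cheap insurance: you can confirm the data is intact before deciding to write anything to the array.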

I then used rsync again and recopied everything from the source 3TB drive onto this huge volume.
I checked and double-checked; in Ubuntu I can see that all of the files are there.

When I remount the two 4TB drives back in the Shuttle NAS, most of the files are missing.

What steps do I need to take to resolve this?

Would it be better for me to wipe the 8TB volume and then try to copy the files back over the network, instead of trying to directly connect the volume using Linux?

Thanks,
-Wooley
 
Update: I took the drives back out of the NAS and connected them to the Ubuntu box again. I am able to verify that all of the files are visible in Ubuntu. Once I connect the drives back up to the NAS, most of the files just go missing again.

I did notice that most of the files had root as the owner and ubuntu as the group. I chowned the files to nobody, hoping that maybe it was a permission issue.

I do have ssh access to the Shuttle, but it is a very limited command set due to it not really being meant to work in this way.

Anyway, still the same issue... I copied the files over manually from one drive to another, but the files/folders do not show up once the drives are connected to the NAS.

Thanks,
-Wooley
 
You're on the right track - it is a user/group permissions issue

You need to find out who the owner user/group is for the samba share on the NAS, and then you can chown/chgrp on the files/directories from there.

Keep in mind that the nobody user is a special user like root on most unix-like operating systems, so that's likely not a good choice to use.
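One low-risk way to do that discovery, since you have ssh on the NAS: look at the numeric owner of a folder the NAS firmware created itself, then mirror those IDs onto your restored data. The path and the 0:99 values below are examples, not known KD20 values; substitute what `ls -ln` actually reports.

```shell
# On the NAS (or with the array mounted in Ubuntu):
# -n shows numeric UID:GID of a directory the firmware created
ls -ln /share
# Suppose it reports owner 0 and group 99; mirror that on the restored files
sudo chown -R 0:99 /share/Media
```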
 
I think I follow you.
When I first mounted the volume the second time, I noticed with an ls -l that most of the folders listed root as the owner and ubuntu as the group.

What appeared to be the original NAS folders (one of them was named "disk") were set as root with a group of 99.

I am still a novice in Linux. I know enough to be dangerous and at the same time not really break too much.

From my understanding, the nobody group = Everyone in regard to Windows permissions. Is that a safe assumption?

The reason I had set it this way is due to an issue I had early on with the NAS.

I have the NAS shares set up with an account which is also password protected. However, I got the impression that this device does not write files using those permissions.

I had a case where I had placed about ten folders into the main sub-folder of one of the shares, and for no reason it would not permit me to delete those folders (access denied).

IE:
Media/Cartoon show 1
Media/Cartoon Show 2
Media/Cartoon Show 3

No matter what, I could not delete "Cartoon Show x", even though I was authenticated to the share and had created those folders from my Windows system while connected to the share. I did not want this to happen again, so I figured this time I would just grant Everyone permission to the main folder "Media"; that way I can delete folders without the NAS getting in my way. My career is IT security, and I am well aware of how bad it is to grant permission to Everyone or to use root accounts; stupid Security 101 stuff of that nature. I actually gained ssh access to the NAS by using a vulnerability in the firmware that I found online.

I guess where I really get lost is why, when I am on Ubuntu with this volume mounted, I see all the files, yet when the drives are in the NAS they do not even show up when I am SSH'd into the box and run ls. Would group permissions in Linux really hide the files from view even when I am logged in as root? Is it just differences between the OSes? I do not know what OS the NAS runs; it's probably some stripped-down, purpose-built Linux distro. I can maybe see this - it is not like I haven't had my share of NTFS partitions that just refuse to mount in Windows, but pop in Knoppix and instantly the files are visible.

Another possibility I am running through my head is this: unintentionally, the filesystem I copied to was NTFS (the 3TB external drive). I borrowed it from a Windows machine just to quickly transfer this data. Maybe, with all these variables - copying from an XFS filesystem to NTFS with rsync, then copying back from NTFS to XFS via mdadm with rsync - this just was not the best way to get things done. Sure it was fast, but it surely is being a huge pain in the rear. This whole experience is making me think about building my own NAS and saying so long to these proprietary boxes that just put up roadblocks when you want to add more storage easily.

Getting back to my original thought: I have set the user as nobody and the group as 99. Maybe I should be setting this as root instead of nobody. Again, I hate having this NAS where I don't understand whether everything is carried out by the root user or whether another user on the NAS needs to have rights to these folders. I am not sure how I would go about finding out how it should be set up, besides blowing it away and then copying some new files to a fresh system to see how the NAS does its thing on its own.

I opened a ticket with Shuttle and their suggestions were to flash the firmware... or send the NAS to them... yeah... that's gonna happen -_-

thank you SFX,
-Wooley
 
Okay, I spoke with a colleague who I would think is orders of magnitude more knowledgeable about Unix/Linux than I am: "nobody ≠ Everyone", which might be one of the main reasons I am having issues. I think that if I change the permissions to root 99, I may have a better chance of having access. I just wish I understood what this 99 thing was.

Thanks,
-Wooley
 
I have stumbled upon the root cause of the issue.

In trying to see the permissions on the files, I see that the ones that are not visible are actually hidden.

Or at least that is my assumption.

All the folders are ./foldername

I am having a hard time finding a command that would let me get rid of the ./ in front of the file names. Any ideas? I pretty much want to make every file in these main sub-directories unhidden, since the only files/folders in them are content that I put there - meaning no system files.

Thanks,
-Wooley
 
That's not a hidden file. The dot-forward-slash (./) denotes the directory or folder you are currently viewing. Dot-dot-forward-slash (../) denotes the path of the parent of the current directory.

So if you have navigated to /usr/files/ and you see a file called "document.txt", the relative path of that document is ./document.txt. The absolute path is /usr/files/document.txt.
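A quick demonstration of the same point (using a throwaway directory under /tmp):

```shell
mkdir -p /tmp/files && cd /tmp/files
touch document.txt
ls ./document.txt            # relative path, from the current directory
ls /tmp/files/document.txt   # absolute path to the same file
```

Both ls commands list the same file; the ./ prefix is just notation, not part of the filename and not a "hidden" marker (hidden files in Unix start with a bare dot, like .config).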
 
I think I follow what you are saying.

My problem is that all the files that do not show up in the directory have this ./ in front of them, yet they are supposed to be in that directory. All the files in the same dir that do not have ./ in front of them are visible.

Example

/share/atonnas/transmission

ls -l

mediafile.exe
mediafile.txt
bigbookofplays.txt
fullmetalmoleman.avi
./appalachia

so when I ls the directory I get something like this

NAS> ls
ls: ./Penguins.png: Invalid argument
ls: ./Classic: Invalid argument
ls: ./DD: Invalid argument
ls: ./Redist: Invalid argument
ls: ./appalachia: Invalid argument (This is an entire folder of over 300GB of data)
ls: ./incomplete: Invalid argument
ls: ./scripts: Invalid argument
7s-wonderware.mkv
autorun.inf
engagevpn.sh
Icon.ico
ext.sh
Keys.txt
nazareth.sh

Again, total Linux n00b here. When things do not work as I expect them to, I get lost real fast.

If I saw something like this on Windows, I would assume maybe these files got corrupted. However, when I load this volume up on Ubuntu, all files are visible and working.

Thanks,
-Wooley
 
FWIW - I'm reluctant to provide any further guidance here as the OP is at serious risk of data loss...

What I will say is: back up your data onto the Ubuntu box, as at least it can see the data...
 
Agree with sfx2000, get it backed up ASAP. You're going to end up losing data. You need to restore those files to new folders on a different machine and get back to square 1 before you dig into the problem again.
 
I would also not recommend trying to copy your files via command line again the way you did.

Copy all the files from the NAS to a local machine and verify that all of your files appear and look correct. Make sure they have local ownership as well.

Then initialize the disk array in your NAS and let it create all of the folders if that's what it wants to do. Then connect to it over the network via SMB or NFS and copy your files over that way.
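That network copy might look like the sketch below. The server address, share name, and username are placeholders; the key point is that files written through the share get whatever owner/group the NAS's own services assign.

```shell
# Mount the NAS share over SMB so the NAS sets ownership itself
sudo mkdir -p /mnt/nas-media
sudo mount -t cifs //192.168.1.50/Media /mnt/nas-media -o username=youruser
# -rtv instead of -a: Unix ownership can't be pushed through CIFS anyway,
# so copy recursively with timestamps and let the NAS assign owner/group
rsync -rtv --progress /mnt/backup-3tb/Media/ /mnt/nas-media/
```

It is slower than a direct SATA copy, but it avoids exactly the ownership mismatch this thread is about.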
 
I have been copying data to the NAS with rsync via cron for over two years. It happens every 3 hours to keep in time with whatever the source device is generating. I doubt it was the way in which the data was copied over that caused the issue. It is more likely that the NAS does not like having files written to its JBOD array from another source.

The original backup of this data is still on the 3TB external, so it's at least backed up enough. Nothing is critical enough to warrant any worries in my head. It's just media.

For future reference for anyone who might stumble across this issue: do not mount your JBOD array from a proprietary NAS enclosure to copy files faster/locally. It just angers the gods. Instead, copy all the data off, then add the new disk and wipe clean, then copy the data back over the network - even if it would be faster to use those sexy SATA III speeds.

Thanks for your assistance guys,
-Wooley
 
Again, apologies - it's not that we don't want to help - it's just that we don't want to provide guidance that would cause you to lose your data...

Moving drives from one NAS to another - it's best to start over, esp. if it's across vendors, as each vendor may do their users/groups/locations differently, and this is likely the problem that you ran into with the files not being visible.

Again, if you can mount the drives on a Linux box, move the data over from the drives to a local file system, and then, and only then, move over to the new box, reinitialize, and then migrate your data over via CIFS/SMB to the reconstituted filesystem.

Step very carefully... and don't delete/format anything without knowing exactly where your data is...
 
I doubt it was the way in which the data was copied over that caused the issue. It is more likely that the NAS does not like having files written to its JBOD array from another source.

That's exactly why I made my second post. Format the NAS drives and copy the files back via your cron job instead of trying to copy them directly from another source.

For future reference for anyone who might stumble across this issue: do not mount your JBOD array from a proprietary NAS enclosure to copy files faster/locally. It just angers the gods. Instead, copy all the data off, then add the new disk and wipe clean, then copy the data back over the network - even if it would be faster to use those sexy SATA III speeds.

Bingo.
 
Again, apologies - it's not that we don't want to help - it's just that we don't want to provide guidance that would cause you to lose your data...

Moving drives from one NAS to another - it's best to start over, esp. if it's across vendors, as each vendor may do their users/groups/locations differently, and this is likely the problem that you ran into with the files not being visible.

Again, if you can mount the drives on a Linux box, move the data over from the drives to a local file system, and then, and only then, move over to the new box, reinitialize, and then migrate your data over via CIFS/SMB to the reconstituted filesystem.

Step very carefully... and don't delete/format anything without knowing exactly where your data is...

Also JBOD and RAID are not necessarily implemented exactly the same way from one vendor to another.

Frankly, this is also in small part why I stick with Windows. It isn't that I know nothing about Unix-based OSes; I've used Ubuntu a little here and there over the years, and I used Solaris 6 and 7 a lot in college. It is just that I am 2x more familiar with DOS and 20x more familiar with Windows than I am with Unix-based OSes, and sometimes how something works in one OS is the exact opposite of how it works in another. I see no reason to make my life harder by introducing different OSes on my network that I need to support unless it is absolutely necessary (which is at least 25% of the reason why I rolled my own Windows server instead of a NAS or a Linux-based server).
 
I see no reason to make my life harder by introducing different OS on my network that I need to support unless it is absolutely necessary.

That's EXACTLY how I feel about Windows. :D If only I could get my users to adopt more stable, easier-to-administer operating systems...
 
We have an internet whose content is heavily based on Adobe Flash. Like it or not. To wit: YouTube, newspapers, CNN, et al.
Try using Flash on a Linux browser other than Chrome. Not. By Adobe's intent.
It'll take years to migrate to HTML5 as an alternative.

I do have a drive here I can boot with Linux Mint. I boot it now and then to update and see what's new.

I guess it's MS Office that keeps me tied to Windows. As well as my bread-and-butter IAR compiler for ARM (consulting work), and several JTAG/SWD tools, etc.
Were it just email and web browsing, I'd use Linux. But non-geek friends/relatives are afraid of it - and use Macs or Windows.
 
If I could do my computer hobby full time, I'd probably be running every OS under the sun, because I enjoy tinkering with stuff. I just don't have the time, and IT is not my job (not anymore, anyway; and when it was, I was a systems analyst and later a team leader of analysts and coders, not a sysadmin or similar). I wish I could spend the time tinkering with stuff, but if I am lucky, every few months I might be able to dedicate half a day to "doing something fun and new" on my network or server. Otherwise my time is only "if something breaks", which is exceedingly rare these days.

I leave computer tinkering to stuff where I might realistically get something more out of it. Like playing with wifi, because most of the time I can do that without knee capping services that my family will complain to me about (at least not for more than a minute or two for a router or AP to reboot) or get myself in a situation where I am going to have to dedicate hours to "fixing" something I "broke".
 
My primary workstation is OS X and I run other OS in VM just to tinker and learn. You've got a point about Flash but it's not insurmountable. I've successfully transitioned people that could barely handle the complexity of Windows XP (aka senior citizens) to distros like Lubuntu and don't hear too many complaints.

My biggest beef with Windows is that it has to use that blasted Registry. With Linux or OS X, when I uninstall an app, it's gone. Trash the app container, trash the pref files, and it's really, truly gone. If an app gets flaky, trash the prefs (which resets it to defaults) and start over. Simple, clean, and efficient.

I've got one Windows machine right now that has a corrupt folder in \SoftwareDistribution and now Windows Update doesn't work. I've tried dozens of Microsoft and 3rd party-recommended solutions and none of them have worked. The next step is to format and reinstall. In the years I've been using OS X, I've restored the base OS from Time Machine a handful of times but I've NEVER had to format and reinstall from scratch.
 
