A huge surge in NAS, cloud and backups

I was talking about the flash and RAM used separately - such as routers only having megabytes of flash space while phones have gigabytes.

There's no excuse for a core component of one's network to have fewer resources than a low-end smartphone, seriously...

And that was a project last fall/winter to address that - spent a lot of time between San Diego, Santa Clara and San Jose dealing with it - pretty advanced architecture SW-wise, lots of neat ideas... not a lot of traction in the consumer OTS space, but we did eventually sell the project off to an Asian company that had vertical alignment with the carriers over there...

I should have known going in - having worked the numbers for years, but it looked like maybe things would align...

And that's why one still sees $350-500 consumer off-the-shelf routers these days...

I can feel good that we hit gigabit routed performance on the platform there - with NAT even...

(FWIW - there's a lot of horsepower in the WRT1900ac(v)2 and later that has not been unleashed - and most of this is because of Linksys' legacy code holding it back, not because of the chips themselves)
 
As for Samba - considering the functionality, it's huge, and that's not counting the dependencies outside of the core Samba binaries...

On top of that, add the twisted waf build system, which is very cross-compile-hostile. I tried twice to get it to cross-compile, and gave up each time.

It reminds me of sendmail (with its m4 system), where the devs used some esoteric build/configure system that nobody else uses, just because they could. It annoys me whenever engineers try to get fancy...
 
Given what Merlin said about NAS's, I'm curious how many "home" NAS's support SMBv2 or SMBv3.

The serious products (all x86 products from Synology, QNAP, Asustor, etc...) all support at the very least SMB2, and some of these also support SMB3. For the rest, it's probably a toss of a coin as to whether they support SMB2 or not.
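Protocol support is settled at negotiate time: the client lists the dialect codes it speaks and the server picks the highest one it also supports, so a device that only ever answers the old SMB1 header is stuck on SMB1. Below is a rough sketch of an SMB2 NEGOTIATE request following the published [MS-SMB2] layout; the field offsets here are written from memory, so treat it as illustrative rather than authoritative:

```python
import struct
import uuid

# SMB2 dialect revision codes (MS-SMB2): 2.0.2, 2.1, 3.0, 3.0.2, 3.1.1
DIALECTS = [0x0202, 0x0210, 0x0300, 0x0302, 0x0311]

def smb2_negotiate_request(dialects=DIALECTS):
    """Build a minimal SMB2 NEGOTIATE request: header + fixed body + dialect list."""
    # 64-byte SMB2 header: ProtocolId, StructureSize, CreditCharge, Status,
    # Command (0 = NEGOTIATE), CreditRequest, Flags, NextCommand, MessageId,
    # Reserved, TreeId, SessionId, 16-byte Signature.
    header = struct.pack(
        "<4sHHIHHIIQIIQ16s",
        b"\xfeSMB", 64, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, b"\x00" * 16,
    )
    # 36-byte fixed body: StructureSize, DialectCount, SecurityMode,
    # Reserved, Capabilities, ClientGuid, ClientStartTime.
    body = struct.pack(
        "<HHHHI16sQ",
        36, len(dialects), 1, 0, 0, uuid.uuid4().bytes, 0,
    )
    return header + body + struct.pack("<%dH" % len(dialects), *dialects)

pkt = smb2_negotiate_request()
print(len(pkt))  # 64-byte header + 36-byte body + 2 bytes per dialect = 110
```

A real client wraps this in a NetBIOS session header and reads the server's chosen DialectRevision from the response - the quickest way to find out whether a box tops out at SMB1, SMB2 or SMB3.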
 
It reminds me of sendmail (with its m4 system), where the devs used some esoteric build/configure system that nobody else uses, just because they could. It annoys me whenever engineers try to get fancy...

Gah... who uses Sendmail these days... postfix is hard enough to configure...

FWIW - sendmail, like bind, is a nice reference implementation, but performance is better with alternatives...
 
On top of that, add the twisted waf build system, which is very cross-compile-hostile. I tried twice to get it to cross-compile, and gave up each time.

Might be another reason why Apple walked away from Samba in OSX - not just for the GPLv3 reasons...

High time for someone to do a FOSS SMB implementation outside of Samba perhaps..
 
Gah... who uses Sendmail these days... postfix is hard enough to configure...

I still manage one Sendmail server for a customer (it's an inbound server). Back in the day, all my Linux-based mailservers were Sendmail. I had to learn to deal with m4 config files...

I manage a few Postfix servers (mostly outbound/relay SMTPs) and found it very easy to manage, even when dealing with multiple instances on separate interfaces and ports. That one customer needs one that's SASL-authenticated for clients, and another on a separate port that's only IP-relayed for use with internal web applications.
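As a hedged illustration of that split-instance setup, a Postfix master.cf along these lines can run two smtpd services with different restrictions on different ports (service names, port numbers and option values here are hypothetical, not the customer's actual config):

```
# master.cf excerpt (hypothetical): client-facing submission, SASL required
submission inet  n  -  y  -  -  smtpd
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject

# internal-only relay on a second port, trusted by source IP only
10026      inet  n  -  y  -  -  smtpd
  -o smtpd_sasl_auth_enable=no
  -o smtpd_recipient_restrictions=permit_mynetworks,reject
```

Each `-o` line overrides a main.cf parameter for that one service, which is what makes running several differently-behaving listeners out of a single Postfix instance so manageable.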

If you want something that's really complicated to configure/manage, look at Exim. I have a few of them (because of cPanel), and any basic config change is a PITA compared to Postfix.

High time for someone to do a FOSS SMB implementation outside of Samba perhaps..

Been saying that for a long time. But with SMB becoming ridiculously complicated these days, it would have to be only a bare-bones implementation to be realistic. Something doing the SMB block protocols, without the AD/Registry/RPC nonsense that most applications don't require. The line as to where to cut the bloat becomes a bit fuzzy when looking at krb5 or encryption... I take it Apple's implementation is closed source?
 
I take it Apple's implementation is closed source?

I don't think it would be something useful for Linux or BSD, as they implemented SMB/CIFS as a service layer inside the kernel... one could take a look at the Darwin open source and see, but it's likely not a productive effort...

AirPorts still use an older version of Samba for their AirDisk sharing... (which is running on top of NetBSD, oddly enough)
 
Been saying that for a long time. But with SMB becoming ridiculously complicated these days, it would have to be only a bare-bones implementation to be realistic. Something doing the SMB block protocols, without the AD/Registry/RPC nonsense that most applications don't require. The line as to where to cut the bloat becomes a bit fuzzy when looking at krb5 or encryption...

Apple isn't the only one to have implemented SMB/CIFS outside of MSFT or the Samba team... OpenStack's Manila project has its own implementation - but again, it's non-trivial to break out as a standalone, as it has dependencies on other OpenStack services...

I completely agree though - Samba is a very feature-rich implementation; as such, it raises the complexity level, and many of these features go unused in most cases...
 
A samba-embedded project would probably be our best chance to ever see a solution, IMHO. A group of developers would need to fork Samba - either 3.6 with the multicall/optimization patches from OpenWRT applied, backporting stuff to it, or 4.x, going on a stripping frenzy to slim it down a bit.

I'd be surprised to ever see an open-source reimplementation of the SMB/CIFS protocol from the ground up, even though there would be a pretty large market for such a product. At best a proprietary implementation, which would be sold to OEMs. It might possibly be what Tuxera is offering (besides their optimized Samba 3.0.25 implementation that Asus has been experimenting with).
 
A samba-embedded project would probably be our best chance to ever see a solution, IMHO. A group of developers would need to fork Samba - either 3.6 with the multicall/optimization patches from OpenWRT applied, backporting stuff to it, or 4.x, going on a stripping frenzy to slim it down a bit.

It would be a reverse-engineered implementation... it could probably be done with 80 percent coverage in a single daemon - but who's going to pay for the development?

I'll toss this out - it can be done - but it's a lot of work...

http://www.informit.com/articles/article.aspx?p=353553&seqNum=4
 
But those that aren't running it hear the horrible news and will start backing up if they haven't already. Many YouTubers and writers are suggesting NASes.

Most ransomware will encrypt data on mounted network drives if you have write permissions.
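A toy model of that point (pure simulation, invented share names, no real crypto or I/O): only shares mounted with write permission can be encrypted in place, which is why read-only mounts and pull-style backup targets survive.

```python
from dataclasses import dataclass, field

@dataclass
class Share:
    """A mounted network share, as seen from an infected client."""
    name: str
    writable: bool
    files: dict = field(default_factory=dict)  # filename -> contents

def encrypt_reachable(shares):
    """Overwrite files on every writable share; read-only mounts survive."""
    hit = []
    for share in shares:
        if not share.writable:
            continue  # no write permission: encrypt-in-place fails
        for name in share.files:
            share.files[name] = b"<encrypted>"  # stand-in for real crypto
        hit.append(share.name)
    return hit

shares = [
    Share("backup-ro", writable=False, files={"a.doc": b"data"}),
    Share("media-rw", writable=True, files={"b.doc": b"data"}),
]
hit = encrypt_reachable(shares)
print(hit)  # only the writable share is touched: ['media-rw']
```

Same reason snapshot/versioned backups help: even if the live copy on a writable share gets hit, the snapshots the client can't write to are left intact.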

Most cloud providers are able to decrease prices if demand goes up.
 
Most ransomware will encrypt data on mounted network drives if you have write permissions.

Most cloud providers are able to decrease prices if demand goes up.
But most don't know that malware can encrypt and write to network drives.

If demand goes up, the price goes up. Businesses aren't interested in offering a good deal anymore; they want maximum profit, so they use algorithms to determine pricing, which always ends up being low prices during low demand and high prices during high demand. Supply is taken out of the equation altogether.
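The demand-only pricing rule described here can be sketched in a few lines (the base price, reference level and sensitivity knob are all invented for illustration):

```python
def demand_price(base, demand, reference_demand=100, sensitivity=0.5):
    """Scale price up when demand exceeds a reference level, down otherwise.

    Supply never appears in the formula - that is the whole point of the
    toy: price tracks demand alone.
    """
    swing = sensitivity * (demand - reference_demand) / reference_demand
    return round(base * (1 + swing), 2)

print(demand_price(10.0, 150))  # busy period: 12.5
print(demand_price(10.0, 50))   # quiet period: 7.5
```

With unlimited supply the function still moves the price, which is the claim being made: the algorithm encourages sales when few are buying and skims more when many are.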
 
But most don't know that malware can encrypt and write to network drives.

If demand goes up, the price goes up. Businesses aren't interested in offering a good deal anymore; they want maximum profit, so they use algorithms to determine pricing, which always ends up being low prices during low demand and high prices during high demand. Supply is taken out of the equation altogether.

There is a lot of competition in cloud services. While price has traditionally gone up with demand, the rule mainly applies to scarce resources, where the limited supply drives prices up. With software this only somewhat applies, on an uneven field without proper competition (and even there for different reasons).

The running and support costs of a cloud service will go up with increased usage, but they are only a small part of a software company's expenses, and other areas will not increase. Unlike with scarce resources, increased sales will lower cost pressure for a software company, as it ends up with relatively larger net revenue. It's the opposite of a traditional field, where increased demand could increase the cost of raw materials or require investments in manufacturing, storage and shipping (and a larger inventory).
 
There is a lot of competition in cloud services. While price has traditionally gone up with demand, the rule mainly applies to scarce resources, where the limited supply drives prices up. With software this only somewhat applies, on an uneven field without proper competition (and even there for different reasons).

The running and support costs of a cloud service will go up with increased usage, but they are only a small part of a software company's expenses, and other areas will not increase. Unlike with scarce resources, increased sales will lower cost pressure for a software company, as it ends up with relatively larger net revenue. It's the opposite of a traditional field, where increased demand could increase the cost of raw materials or require investments in manufacturing, storage and shipping (and a larger inventory).
It's quite the opposite: the more people that use computing resources, the cheaper it becomes. Why do you think there are massive compute clusters for research and such?

As I was saying, the problem isn't to do with supply. As technology gets better, things like cost per GB and cost per B/s get much cheaper, but companies don't really care about supply-demand graphs anymore; they care about profits. So to maximize profits, they take supply out of the equation. A lot of places are starting to use algorithms to set prices, with profit as the main factor, so while the pricing appears to follow supply-demand economics, in truth it doesn't. Even with unlimited supply, the algorithm would raise or lower prices depending on demand only: if a lot of people are buying, the price is increased; if very few are buying, the price is decreased to encourage sales.

For example, online backup services could offer 100GB to everyone; they may not need to have 100GB physically available per person, because usage differs and they only need enough to fit the average use. Hence my point that supply does not matter.
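That oversubscription math can be made concrete with a small sketch (all figures invented): promise every user 100GB, but provision only for average use plus headroom.

```python
def provisioned_gb(users, promised_gb=100, avg_use_gb=12, headroom=1.5):
    """Capacity actually purchased vs. the naive sum of all promises.

    avg_use_gb and headroom are made-up illustrative numbers; real
    providers size against measured usage distributions, not a flat
    average.
    """
    naive = users * promised_gb
    actual = int(users * avg_use_gb * headroom)
    return naive, actual

print(provisioned_gb(10_000))  # promise 1,000,000 GB, buy only 180,000 GB
```

The gap between the two numbers is why the promised "supply" barely constrains the price.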

Take gas prices, for instance: there is a lot of supply currently (despite it being a limited resource), yet fuel costs many times more than it did two decades ago, much more than the rate of inflation. It's not the supply; it's the demand, and the companies involved all focused on maximizing profits instead (besides, many of the companies had government links, so the money went to their origin governments). Now algorithms are used to set the price of fuel by the hour based on how busy the gas station is at that time, and to set item prices for different individuals based on their willingness to spend.

As I said, it's designed to take all the money it can from you. Implementing this sort of economic practice has a serious price to pay, however, as there is a finite amount of wealth. It will get to a point where money just becomes worthless. I always say that the more overpriced an item is, the less valuable it actually is.

There also needs to be a decently priced online service with decent backup space, as this is something that would require regular payments. I find many NASes priced too high for the average consumer, which leaves a gap that routers fill nowadays.
 
Take gas prices, for instance: there is a lot of supply currently (despite it being a limited resource), yet fuel costs many times more than it did two decades ago, much more than the rate of inflation

Depends on where one lives - gasoline/petrol, relative to income, is actually lower than it has been in a long time... so some of this is market-pricing based, and a lot of what you're paying per gallon/liter is actually taxes on the delivered product...

So probably not the best reference...

Cloud and Storage - pricing is coming down very fast, as there's a fair amount of competition, and the price/storage values are coming down incredibly fast...

100GB of cloud storage is essentially free these days, when the cost of 1 year of 1TB is less than 100USD retail to the end user...

Without getting too political - this is also the argument against data caps. Bandwidth is like an ice machine: if it's not consumed, it's wasted. But the ISPs and carriers suggest otherwise - that it's something to be metered, measured, and conserved where/when necessary... which is essentially wrong. It should be based on a bandwidth commitment: someone signs up for 150 Mbps down / 10 Mbps up, and it should be that, even if it's 100 percent utilized on a 24/7 basis...
 
Depends on where one lives - gasoline/petrol, relative to income, is actually lower than it has been in a long time... so some of this is market-pricing based, and a lot of what you're paying per gallon/liter is actually taxes on the delivered product...

So probably not the best reference...

Cloud and Storage - pricing is coming down very fast, as there's a fair amount of competition, and the price/storage values are coming down incredibly fast...

100GB of cloud storage is essentially free these days, when the cost of 1 year of 1TB is less than 100USD...

Without getting too political - this is also the argument against data caps. Bandwidth is like an ice machine: if it's not consumed, it's wasted. But the ISPs and carriers suggest otherwise - that it's something to be metered, measured, and conserved where/when necessary... which is essentially wrong. It should be based on a bandwidth commitment: someone signs up for 150 Mbps down / 10 Mbps up, and it should be that, even if it's 100 percent utilized on a 24/7 basis...
Well, my main point is that prices are set irrespective of supply. It could be that the current low price of online storage is due to the lack of demand for it, and the high prices of NASes are due to the higher demand for them.

In the US you have no reason to cap data, as most things are hosted in the US. In other countries a lot of traffic goes out to places like the US. When I look at what US ISPs do with their infrastructure, it is appalling considering the kinds of subsidies and profits they get at the same time. In fact, US ISPs would profit a lot if they simply gave more upload, as the US is a major exporter of data, since most of the internet ends up being hosted there. Most content hosted elsewhere is mainly local, because of language and country restrictions, as with China.
 
Well, my main point is that prices are set irrespective of supply. It could be that the current low price of online storage is due to the lack of demand for it, and the high prices of NASes are due to the higher demand for them.

Market sets the pricing - competition helps...

Look at 3G/4G pricing - when one has multiple players, pricing goes down; with fewer players, pricing is stable or goes up (except in certain countries, like Canada - three main players, but there's a bit of cooperation going on there, which is perhaps a policy issue for the govt - not much different than the energy/transport trusts of the early 20th century that the US had to break up).

I'll agree though - NAS prices are a bit high, but as long as folks are willing to pay... but it's also an opportunity to disrupt that business by coming in at a lower margin perhaps...
 
