The FSF really should warn how things would go if done wrong. But right after every such point, they should also explain why things should usually be done the way they suggest.
But the fact is that after a few decades, you no longer want to repeat the "free software gives everyone the possibility to…" kind of message; you start pinpointing negative things and explaining why they do not work. And at that point, the message only works for those who already know the FSF philosophy.
I would even claim that "evil" companies like Google spread more freedom than GNU generally does… Chrome comes to mind. And Firefox wouldn't have the market share it has without big G's cash.
FOSS on the desktop is really having a very hard time. Red Hat, Mandrake/Mandriva and (open)SUSE have tried it; Ubuntu seems to be the last distro still fighting for it with some success. Chrome OS and MeeGo might one day be the OSes that bring it to the average user, backed by commercial parties.
Only when commercial parties picked up FOSS did it really reach a larger audience. IOW, pragmatism works… And what drives a large proportion of FOSS contributors? Fun, not principles; those often come only some time later.
So I think we need to focus on building good business models, then the users will come and we can make a difference. At some point in time people will appreciate the freedom simply because it actually helps, not because we’re telling them to care.
You are dooming the whole of FOSS by judging it only by desktop usage. Google Chrome OS and Android are not new OSes; they both use the same OS, the Linux kernel.
And desktops are not a small market share for Linux. Linux has over 10% market share on desktops. Do not trust the falsified net statistics or other "studies". If we check the user base of even a few mainstream distributions, we get over 50 million. And in Brazil alone, there are over 52 million Debian users (users != installations). All these companies try to prove their own statistics by the number of Windows licenses sold, which does not work at all.
Neither do PC sales figures.
Most Windows licenses that Microsoft sells go to companies. But a common company policy is to take the OEM license with the PC and then install a separately bought Microsoft license over it. So Microsoft sells two licenses even though only one is used. Not to mention that every year lots of companies throw away old computers with the OEM license still on them, buy new OEM PCs, and reuse the old separately bought licenses from MS. That way a company saves a lot of money while two OEM licenses go completely unused.
And Microsoft has said that 350–500 million users get updates from them. The number of PCs sold over five years gives roughly the same number of users. So we could estimate that there are about 800 million PCs out there, and that is an overestimate. Nowhere near the 1 billion or 1.5 billion that PC OEMs want to believe.
Microsoft gave statistics on Windows 7 sales. Net statistics said 7 had a 9% market share when MS had sold 60 million licenses, and that does not even include pirated versions or RC versions. If you get 9% (let's say 10%, it is easier) with 60 million, how many computers would 100% market share be? Only about 600 million. And if OEM customers are using double licenses, we can immediately discount a quarter of those computers. And since a worker usually has a computer at work and at home, that is two computers, and two licenses, for a single person (or even three). So we can easily arrive at a rough figure of about 600–700 million computers.
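The extrapolation in the comment above can be sketched in a couple of lines; note that the 60 million licenses and the ~10% share are the comment author's own figures, not verified data:

```python
# Back-of-the-envelope extrapolation: if N licenses correspond to a given
# market share, the implied total installed base is N / share.
def implied_total_pcs(licenses_sold: int, market_share: float) -> float:
    """Total installed base implied by a sales figure and a share estimate."""
    return licenses_sold / market_share

# The comment's numbers: 60 million Windows 7 licenses at ~10% share.
total = implied_total_pcs(60_000_000, 0.10)
print(f"{total:,.0f} PCs")  # 600,000,000 PCs
```

Of course the result is only as good as the share estimate fed in, which is the weak point of all these calculations.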
And counting the estimated user base of mainstream Linux distros, we get about 100–120 million. That is a big part of those 600–700 million PCs.
The OLPC alone has shipped about 10 million PCs, and even that is not much compared to the combined user base of Debian, Ubuntu, openSUSE, Mandriva, Fedora and a few other mainstream Linux distributions.
It could be that the true desktop market share for Linux is 10–15%; at most it might be about 20%, which is very much an overestimate but not impossible!
And that is only on PC’s. There is other marketshare, the Macintoshes (aka Mac’s). They are not PC’s but they are usually counted together with that. When compared to PC’s, there is about 10-13% of the Macs with same scale. So when we count for personal computers (personal computer != PC) we could get that there is about 20-25% marketshare for other than Microsoft. When thinking that MS has 70-75% share, it is very logical when checkin streets and what people use at home. Every fourth/fifth person use Mac OS X or any other software system ran by Linux operating system.
And when it comes to servers, Linux has a 60% market share and MS 40% on web servers alone. On supercomputers, Linux has over 83% and Microsoft 1%. On mobile devices, Linux has about a 35% market share (remember, the Linux kernel is the operating system; Linux is a monolithic kernel, not a microkernel), because Android, MeeGo, WebOS, Bada and many other Linux distributions for mobile phones need to be counted as one. The biggest is Apple with their XNU operating system (the OS in iOS, the same OS that runs Mac OS X; XNU is a hybrid kernel built around Mach, not a pure microkernel).
In the end, we can estimate that on desktops Linux has a 10–15% market share.
On servers it has about 50–60% (web + database etc.)
On supercomputers, over 80%
On mobile devices, over 40%
And when it comes to embedded systems, Linux has a very strong position there too.
Linux is everywhere: almost every ADSL and cable modem, some TV sets, most NAS devices, MP3 players, DVD players and so on use the Linux kernel as their OS.
We should not judge FOSS success by the statistics Linux has on desktops. We need to look at the whole picture, and once we know (if we would accept) that the Linux kernel is the operating system, the whole picture becomes clearer. We have Linux (and therefore FOSS, because the Linux OS is licensed under the GPLv2) almost everywhere.
It is far from a failure. People just do not know it, because lots of companies market how "they" are leading. And we need GNU to offer criticism of what is wrong there. But what we definitely do not need is their "GNU/Linux" propaganda, because Linux is not a microkernel. GNU can work on their own OS, the Hurd, as much as they want! They just need to replace the GNU Mach microkernel with another kernel, like pure Mach 3.0. Otherwise the Hurd cannot compete with Linux in the OS market. Even FreeBSD is better at that.
That concept always depends on the context, the goal one is trying to achieve.
If your goal is to get as many customers as possible, it can be pragmatic to include software from different sources, even if some of those sources do not share your enthusiasm for FOSS.
If your goal is to be able to participate in all aspects of digital life using FOSS, then using proprietary software does not get you any further.
The FSF actually has a very pragmatic stance on how to reach this goal. They prefer reciprocal licences but also know that the goal might be easier to reach if you also make use of non-reciprocal or semi-reciprocal licences, e.g. BSD or LGPL respectively.
And in the long run, it is easier to get all software free if we do not use closed-source software at all. That in a way forces users to understand the problem, and then gives developers an idea of what to really do and which needed software to focus on.
But in the short run that is not clear or easily understood, especially by users who need to get something done NOW and not after a year.
But looking at Linux distribution development in the years 2000–2004, the long-run approach worked much better. Closed source was kept out and development rose quickly. But after 2004, when Ubuntu and others started to use more closed-source software, development in those areas has stalled. Users depend on big corporations (like Adobe) that do not like to update their software. Compared to open-source software like KDE SC, where development is very rapid, closed software really ties users down and limits their possibilities to actually get better tools.
And looking to the future in the very long run (our children's time and beyond), open source is the only real way to go. There is no place for closed source except in very rare cases (in security, like watermarking a photo so the watermark cannot be removed; if you know the watermarking algorithm, you can remove it from the photo).
So even though the FSF is on edge about closed source, and seems to preach about it very much without making sense to everyone, they are on the right course toward a better place for human beings when it comes to computers. More and more, the hardware is used only as the base, while the software is the "spirit" that actually makes the hardware worth using.
Look at GPS navigators for cars right now. We do not get map updates, we do not get software updates; we need to buy a new device! Even though all these devices have a touch screen and a GPS receiver in much the same way. It would be so awesome just to get the software updated for a small fee (like 10–20 euros a few times a year, when needed) and to buy a new device only when you want a bigger screen or something else, like faster calculations.
Closed source really is evil in the long run, and we all suffer from that.
Quote from the site: “Most contributors I know see no problem with proprietary services like Dropbox and Ubuntu One. With very few exceptions, most companies that work in the community have settled on some mixture of proprietary and open source services to try to find a working revenue model. In short, the free software philosophy seems to have gone out the window for most users and contributors.”
So that article says the pure free-software idea does not work. I am against that logic. Canonical is not so good for the F/OSS community because of the ideas and opinions it plants in new users' minds. It is almost doing more bad than good.
People do not know the technology. They even believe that Ubuntu is not Linux, but that Ubuntu is an operating system which is better than Linux. (Not to mention that they do not even know that the Linux kernel really is the operating system, something not even many GNU fans know at all.)
Users believe that the software packaged for Ubuntu, or preinstalled on it, is developed for Ubuntu, and that it is not at all certain it exists in other "OSes" (distributions for us; different OSes for them).
And Canonical "stole" GNU's free-software philosophy and marked it as its own. So too many Ubuntu users believe that Canonical invented that great idea, and in the worst case, that Canonical invented Free Software! (Which is not even GNU's invention: in the beginning of computer science, all software was free software (not as in beer, but as in speech), and only after a few decades did universities and students start private companies that closed the source.)
And Ubuntu users cause lots of problems by not understanding how the technology works. Ubuntu users do not know what to blame when they hit problems. If there is a problem in GNOME, they easily blame the OS (the Linux kernel). If there is a problem in an application that was packaged the wrong way for Ubuntu, they blame the application and not the packaged version, even when the problem does not exist upstream, because they do not know how the upstream/downstream development model works.
And because they do not understand open-source development models and software technologies, they do not understand Linux distributions or how to solve problems. Many blame KDE SC for not having a specific application and then say GNOME is much better, because when they install Ubuntu they have more software available. So basically, they mistake the desktop environment and the software system for one thing.