I'm sorry, I guess I hadn't properly explained myself.  What I wanted to propose was . . .

1 - Establish a government body, and hire people to write good OSes (for whatever device categories need them - including but not limited to home PCs, network servers, database hubs, and the computer bits that run industry and infrastructure hardware) and basic application software, with an eye to the medium to long term, and with things like security and reliability built in from the beginning.

2 - As a side project, do basic research into things like software verification and whatever other basic things that we don't understand all that well, but which might be useful for the staff working on 1.

3 - When a bunch of the stuff starts to coalesce, think about standards based on the new stuff, and how to use them to bring everybody else up to par over time.

by Zwackus on Tue Jun 11th, 2013 at 09:41:08 AM EST
And I hadn't explained myself because a side-idea got stuck in my head in the writing, and went first, and obscured the bit that I thought was more important all along.  Grrr.

What I'd really meant by standards, at least when I was writing it, was something less like ASCII and more like an objective way of measuring how secure a piece of software is.  I don't think there's really any way right now to formally state or measure something like that, and this seems like a problem.  Maybe it's utterly impossible, but it seems like it would be useful to have a proper security rating, one that is properly testable, with legal restrictions based on it.  For example, anything that accesses the internet must score 8/10 on the formal security scale, or something.

by Zwackus on Tue Jun 11th, 2013 at 09:45:49 AM EST
There are lots of security ratings. They're mostly useless, or so time-consuming and expensive to pass that they apply only to previous generations of tech and can only be passed by the big corporates.
by Colman (colman at eurotrib.com) on Tue Jun 11th, 2013 at 09:50:06 AM EST
Well - that's been my point here. Such a thing is simply not possible given the current state of the art, no matter how much money you throw at it and how many clever people you hire.

Even if you devised a perfectly secure system - using quantum signalling, or something - there's still a key on file somewhere, or stuck on a Post-it note next to someone's desk. Etc.

Even if not, security services will demand a back door, which can be exploited.

Security is relative. Most security is non-existent. A few applications pretend to offer 'almost good enough', with hope rather than certainty.

All information has a market value, and if the cost of breaking security is higher than the value, you're safe, up to a point.

But some hackers like breaking into things just because they can. So 'secure' is pretty much meaningless in absolute terms, and certainly not something you can rely on with any confidence.

by ThatBritGuy (thatbritguy (at) googlemail.com) on Tue Jun 11th, 2013 at 11:22:18 AM EST
As computing is a fairly new thing, in terms of human endeavors, there may still be a fair bit of wiggle room when thinking about what might be possible or impossible.  Throwing steady, full-time employment at people and asking them to think about the problem may be a waste of time if all one is looking at is the final success of the project.  However, this sort of job-creation program seems no more harmful or misguided than most, and if worst comes to worst, the engineers and programmers so employed, and their families, and the people from whom they purchased goods and services, will have been better off for it.

And even if the project fails in terms of its main goal, it's possible that something good may well come of it.  It's a heck of a lot more likely than putting people to work on weapons tech, where success is its own form of failure.

by Zwackus on Wed Jun 12th, 2013 at 12:52:12 AM EST

