Archive

Archive for the ‘Technology’ Category

Another Reason Why I #%$@& Hate Flash

October 17th, 2015 Comments off

Nowadays, it seems that half of the videos I try to view on the Internet bring up this dialog:

Flash Cookie

On my Mac, no matter which browser I use, this message will appear—and cannot be dismissed. It will sit there, doing nothing, whether I try to click on “allow” or “deny.”

The only way to get rid of these annoying intrusions without a convoluted hack is to go into the Flash settings and allow sites to always store data on your computer.

However, that “stored data” is what is called a “cookie,” the kind of thing that often invades your privacy and works for marketers—in this case, a Flash Cookie, which is even worse, because it is not restricted to one browser, and is not purged when you clear cookies from all of your browsers.

Added to this: Flash is a magnet for hackers. It is like installing a dog door for a Great Dane on your computer, letting intruders in with relative ease. Every few weeks, my videos shut down and I have to install yet another new version of Flash, which I do by going directly to Adobe’s site for the download. Why don’t I use automatic update or a link? Because those are common avenues for infection, that’s why. And even if you do update, you’re still not fully protected: the latest version of Flash was released with a zero-day exploit already active against it.

Flash is almost as much a bane to the Internet as spam is. It is high time it was put to a quick, violent death.

Categories: Technology Tags:

The ABC’s of Data Deletion

September 13th, 2015 Comments off

Hillary claimed ignorance regarding how the process of “wiping” a hard drive works. That doesn’t surprise me, but the ignorance of journalists in the matter is surprising. Doesn’t anyone do even basic research any more? Here’s the Washington Post—the Post, for crying out loud:

“To make the information go away permanently, a server must be wiped — a process that includes overwriting the underlying data with gibberish, possibly several times.”

Really? A “server” must be wiped? With “gibberish”? Oi. Hold on, I’m about to get into the nuts and bolts of it. If you don’t want to know how erasing data from a computer works, move on—but it’s good knowledge to have, especially if you want to protect your data when disposing of an old device!

First of all, a “server” is not what gets wiped; the hard drive is. A server is technically not even a computer; it is software running on a computer, and the machine running it is often referred to as a “server” as well. But the part that gets wiped is the data on the hard drive.

Next, “wiped” is not a technical term; it is vague at best.

There are four basic ways to delete data on a disk: first, delete it from within a program; second, simply trash the files and empty the trash; third, reformat the hard drive; and fourth, to “zero out” the drive.

The first three ways of deleting data (deleting from within a program, trashing, and reformatting) leave the data recoverable, depending on the circumstances. None of them actually destroys the data; in all three cases, the data remains on the disk. Depending on the file system used, either the space it occupies is marked as available to be taken up by new data at any time, or its directory information is erased so the computer “sees” the disk space as “blank” and is free to write new data there.

In either case, data stays on the drive until the computer needs to save newer data and decides to reuse the space taken up by the older data. This happens bit by bit, and depending on how full the drive gets, “deleted” data can remain on the disk for weeks, months, or even years, and is often only partly destroyed. If the disk is nearly full, most of the data may be overwritten quickly because the space is needed; if the disk is mostly empty, there is a good chance most of the data still remains, though any of it could be partly or fully overwritten at any time.
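A toy model can make this concrete: deletion removes the directory entry and marks blocks free, but the bytes themselves linger until reused. This is a deliberately simplified sketch (the class, block count, and block size are invented for illustration; real file systems are far more complex):

```python
# Toy model of why "deleting" a file does not destroy its data:
# the file system only forgets *where* the file was; the blocks
# keep their contents until something new overwrites them.

class ToyDisk:
    def __init__(self, num_blocks=8):
        self.blocks = [b""] * num_blocks       # raw storage
        self.free = list(range(num_blocks))    # blocks available for reuse
        self.directory = {}                    # filename -> list of block numbers

    def write_file(self, name, data, block_size=4):
        chunks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
        used = []
        for chunk in chunks:
            blk = self.free.pop(0)
            self.blocks[blk] = chunk
            used.append(blk)
        self.directory[name] = used

    def delete_file(self, name):
        # "Deletion": only the directory entry goes away, and the blocks
        # are marked reusable. The data itself is NOT erased.
        for blk in self.directory.pop(name):
            self.free.append(blk)

disk = ToyDisk()
disk.write_file("secret.txt", b"top secret!!")
disk.delete_file("secret.txt")

# The file is gone from the directory...
assert "secret.txt" not in disk.directory
# ...but its bytes still sit on the disk, recoverable until overwritten.
assert b"secret" in b"".join(disk.blocks)
```

A recovery tool works by scanning those orphaned blocks directly, ignoring the (now empty) directory.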

That fourth way is what the Post is rather cluelessly referring to, and it is the only way to securely erase data from a hard drive.

The technical term for this—the one everyone should be focusing on—is to “zero out” the disk. This is a process in which the computer literally writes all zeroes (rather than zeroes and ones) in every single place that the disk contains data. (It does not write “gibberish,” which would be random zeroes and ones.) The “zeroing out” process usually completely destroys the data that used to exist on the disk.
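For a single file, the zeroing-out idea can be sketched in a few lines of Python. This is a conceptual illustration only: on SSDs and on journaling or copy-on-write file systems, the zeros may land in new physical blocks, so this is not a guarantee of secure deletion:

```python
# Sketch of "zeroing out" one file before deleting it:
# overwrite every byte with zeros, force the write to disk, then remove it.
import os
import tempfile

def zero_out_file(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)   # overwrite the contents with all zeros
        f.flush()
        os.fsync(f.fileno())      # push the zeros out to the device
    os.remove(path)

# Demo on a throwaway temp file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"sensitive data")
zero_out_file(path)
assert not os.path.exists(path)
```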

You may wonder why this process is not always used; the answer is that it takes time. Writing data to a disk takes a certain amount of time, and destroying that data by overwriting it takes just as long. Saving a long video from your smartphone to your computer might take a minute; throwing it in the recycle bin and emptying the bin takes almost no time, because the data is not being zeroed out. That’s usually fine, and people would be annoyed if emptying the trash took several minutes every time. But zeroing out a whole disk takes hours.
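The “hours” claim is just throughput arithmetic: every byte has to be rewritten at the drive’s sequential write speed. A quick sketch, with illustrative assumed figures (a 1 TB drive, a typical spinning-disk write speed):

```python
# Rough arithmetic for why zeroing a whole disk takes hours.
# Both figures below are illustrative assumptions, not measurements.
disk_size_gb = 1000       # a 1 TB drive
write_speed_mb_s = 150    # plausible sequential write speed for a hard disk

seconds = (disk_size_gb * 1000) / write_speed_mb_s
hours = seconds / 3600
print(f"One full zeroing pass: about {hours:.1f} hours")
```

A faster drive shortens this, but even at several hundred MB/s a large disk still takes on the order of an hour per pass.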

You may need to consider this the next time you sell, give away, or even throw away an old computer: unless you “wiped” the hard drive in a way that took hours to accomplish, your data has not really been erased, and can be recovered!

So, is the data really destroyed when you zero out the disk? It depends. Remember, the Post wrote that it must be done “possibly several times.” Older hard drive technology was not so precise: the marks used to indicate a 1 or a 0 might not sit in exactly the same position each time, in which case overwriting with a zero might not completely cover up the previous data. (It would be very hard to recover more than fragmentary data even so.) As a result, older drives needed to be zeroed out many times. Apple offers the option of zeroing out the whole drive 7 or even 35 times! Just once can take a few hours, so, well, you do the math.
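Doing that math, with an assumed per-pass time of two hours (illustrative; the real figure depends on the drive):

```python
# Multi-pass erasure multiplies the time linearly.
hours_per_pass = 2  # assumed time for one full zeroing pass
for passes in (1, 7, 35):
    total = passes * hours_per_pass
    print(f"{passes:>2} passes: {total} hours (~{total / 24:.1f} days)")
```

A 35-pass erase at that rate ties up the machine for roughly three days.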

Newer hard drives are more precise and may need to be zeroed out only once. I am not certain, but there may be some way for super-uber-geeks to still recover that data; I would bet against even them getting more than a few crumbs here and there. It is supposed to be quite secure, but zeroing-out software still provides the option of multiple overwrites.

Now, you may be wondering: how can I zero out my data? If you have a Mac, zeroing out is built into the OS. In the Finder menu, under “Empty Trash,” there is an option to “Secure Empty Trash”; this will zero out only the data you have in the trash. If you open an app provided by Apple called “Disk Utility,” there are options to “securely erase” whole disks. For Windows, you can download free software that does the same thing; just search (in a trusted software source) for “zero out utility.”

If you don’t feel like you can do this yourself, get a geek friend to do it for you. If you can’t, then be aware that your data can be accessed by the next person to get their hands on that device.


Zeroing out is what Karl Rove and the GOP did when they tried to destroy 22 million emails they didn’t want the public to see. In 2010, an archive of the emails was in fact found—but we still don’t know what was in them, as they are going through a review that has so far taken 5 years, presumably to weed out classified material.

And then there’s another flaw in this now-raging “news” story: the only information we have is that the company that maintained the server (obviously not the only ones with access to it) said they had no record of it being “wiped.” Not only does that not mean they didn’t zero it out; it also has no bearing whatsoever on whether someone else, like Hillary’s IT guy, zeroed it out. If they were smart, they would have taken the email archive, deleted the emails they felt were personal, copied the reduced archive to a new disk, and then destroyed the original archive data.

It appears that Hillary’s email deletion was far more casual—but I’d be willing to bet good money that if the files can be recovered, Republicans will waste no time rifling through every last one they can find and then leaking the juiciest ones, probably completely out of context and even partially made-up to boot, just like they did with the Benghazi emails.

And in the end, this is all about nothing more than an attempted smear job. Conservatives could give a rat’s ass as to whether Hillary actually did anything wrong, and they sure as freaking hell do not give a crap about whether national security was at risk (these are the people who outed a CIA agent for political payback, remember—one of the key issues discussed in emails the Bush White House deleted). No, this is about shooting Hillary down and nothing else.

The press should be ashamed that they’re giving this more than back-page attention.

Categories: Journalism, Technology Tags:

Unbundling Is Overdue

February 7th, 2015 1 comment

The FCC’s recent stance on Net Neutrality is nice and all, but one critical element is still missing: unbundling, which requires carriers that own infrastructure to lease their last-mile connections to competing services at low, regulated rates. You might think it unfair to force companies to share private resources, but (1) these resources are built on public land, and (2) they were heavily subsidized by federal, state, and local governments—i.e., you, the taxpayer. They may own it, but you mostly paid for it.

This egregious sop to the telecoms largely goes unnoticed, but the lack of unbundling more or less prevents meaningful competition, causing higher prices and slower service. Unbundling in Japan and Europe has created healthy competition and far superior service. For example, I get fiber-optic FTTH Gigabit service at home, which includes telephone service (we could add TV for a nominal fee if we wanted), and my monthly bill is less than $60. Plus we get $150 to $300 per year off our two cell phone contracts for using the same carrier for both. $30 a month can get you 100 Mbps service.

In the U.S., how many choices do you get for Internet service? In Japan, it’s not uncommon to have your choice of half a dozen providers offering various deals and packages when you go to any electronics store and visit the carrier counter.

My Computer students are always shocked to hear that Internet service in the U.S. is slower and more expensive than in Japan. Yes, some of that is due to the U.S. being a larger country, but the absence of strong government incentives, meaningful regulation, or any kind of comprehensive national policy to promote a healthy market is far more responsible for the shoddy product so many Americans suffer with nowadays.

Oh, Steve…

August 13th, 2014 1 comment

Steve Ballmer, in 2007, when the iPhone was introduced:

There’s no chance that the iPhone is going to get any significant market share. No chance. It’s a $500 subsidized item. They may make a lot of money. But if you actually take a look at the 1.3 billion phones that get sold, I’d prefer to have our software in 60% or 70% or 80% of them, than I would to have 2% or 3%, which is what Apple might get.

In a video interview, he said essentially the same thing, concluding, “Let’s see how the competition goes.” That was seven years ago.

From a report on mobile devices released today:

net activations by platform: iOS = 67%, Android = 32%, Windows Phone = 1%

And that’s for enterprise, traditionally a market dominated by Microsoft. In the video interview, Ballmer said it wouldn’t appeal to business customers “because it doesn’t have a keyboard, which makes it not a very good email machine.”

Poor Ballmer: the other Steve, Steve Jobs, is now secure in his reputation as a tech visionary, while Ballmer’s claim to fame will probably be as “Monkey Boy.”

Categories: Gadgets & Toys, Technology Tags:

Going a Bit Too Low on the Tech

July 21st, 2014 7 comments

Headline: Foreign Governments Consider Reverting To Typewriters To Thwart NSA Surveillance. From the story:

In a television broadcast, German politicians said members in the Bundestag — Germany’s parliament — are strongly considering dropping email altogether, opting for typewriters and penned notes to prevent the United States’ National Security Agency from eavesdropping, the Guardian reported Tuesday. Russian government officials also said last week they were reverting to paper communications. The FSO, an agency that protects the Kremlin government and other top officials, has already ordered nearly two dozen typewriters, according to USA Today.

“Any information can be taken from computers,” former head of Russia’s Federal Security Service, the domestic successor to the KGB, Nikolai Kovalev, told Izvestia. “[F]rom the point of view of keeping secrets, the most primitive method is preferred: a human hand with a pen or a typewriter.”

Seriously? Nobody ever heard that computers and even entire LANs can be disconnected from the Internet?

Categories: Technology Tags:

Sometimes You Wish You Wrote Stuff Down

April 19th, 2014 Comments off

No way I can prove this, but about ten years ago, when lecturing in my survey course on Computers, we were reviewing computer history. I pointed out the evolution of computer technology—from vacuum tubes to transistors up through IC chips and multiprocessors—of hardware types, from computers which were building-sized, room-sized, cabinet-sized, desktop-sized and mobile types from laptops to handhelds—and of user interfaces, from paper tapes and punch cards, and from the command line to the GUI and to multitouch. I showed them these trends over time and then asked them to project, to imagine where things would go over the next half century.

Usually, some students asked me to answer the question myself. I would sometimes talk about surgically implanted computers, or focus on interface elements such as motion or voice control. Unsatisfied at not having a coherent image to offer, I eventually developed a single concept (remember, this was ten years ago, before even the iPhone was out).

When asked what a computer in the future would look like, I took off my glasses and pointed at them. I noted that they had all the elements you might need for input and output in a compact space. The lenses could become displays; the temples (the parts that extend over the ears) could house microphones and speakers. Whatever components were needed locally would fit into the frame, but the unit would depend largely on computing power housed elsewhere, accessed wirelessly. Cameras would be mounted at the far end of each lens. Control could be by voice, or else via a motion-control visual interface, a la Minority Report. (After 2009, I pointed to the Kinect.) As for use, I noted that social media might extend into shared experiences: when you go shopping, you could take your friends along, with them seeing what you’re seeing, for example.

Over a few years, I developed this idea and fleshed it out. And then, damned if Google didn’t steal my idea. Not having blogged it or incorporated it into my class web site, all I could do was lamely point out that I had the idea years before Google came out with Glass.

On the other hand, the idea was kind of inevitable, and looking back, others had it before I did, and did write it down. I believe that a similar idea was included in David Brin’s 1990 novel Earth, and John Varley interestingly covered an evolution of this type of future technology (up to nanites being sprayed onto the eyes) in his Red Lightning and Rolling Thunder novels in 2006 and 2008. I’m sure many other novels over the years also laid out the idea, and countless thousands of people had thoughts similar to mine and similarly did not write them down.

Still, it’s fun to be somewhat ahead of the curve….

Categories: Technology Tags:

Progress

March 31st, 2014 Comments off

Looking back in my “This Day Past Years” area, I note that I posted on a Toshiba battery announced in 2005. The press release touted a new Li-ion battery which “can recharge 80% of a battery’s energy capacity in only one minute.” Toshiba said that they would “bring the new rechargeable battery to commercial products in 2006.”

In 2008, they announced another breakthrough: a new, much-improved battery, this time a “Super Charge Ion” battery, which would “recharge to 90 percent capacity within 10 minutes.” Ummm… wait.

However, they also “said the technology is still a ways off from making its way into computers.”

Well, here we are, 2014. I still don’t have a battery that can charge to 90% in 10 minutes, or to 80% in one minute.

Maybe Toshiba will announce a new battery this year, which will charge to 100% in 20 minutes, but may not be released for another 10 years.

Categories: Gadgets & Toys, Technology Tags:

Going Solar

November 4th, 2013 2 comments

I am still seeing Americans going nuts over Fukushima as they largely ignore the extent and the damage caused by fracking. Others are now stating that it is time for us to go even more nuclear. To me, this is like arguing over whether you should submit yourself to a few dozen blasts from a shotgun using rock salt or a few body hits from a .45.

It is WAY past time to go solar.

And I don’t mean that just more people should go solar, though they should. I mean that instead of just loaning a bit here and there to solar companies, we should go all-out and start buying and installing solar on all levels.

Flatscreen LCD monitors, by rights, should not have caught on as quickly as they did. At the time they were becoming popular, they were a poor alternative to CRTs: less bright, less sharp, less able to present various resolutions cleanly, with worse viewing angles, and far more expensive. They had only one advantage: they were thin. That was enough for a lot of people to pay a lot extra for them.

So what happened? Because sales were good, producers started gearing up for them more. Corporations went all-out researching how to make the screens brighter, sharper, better, and cheaper. So now, LCD screens are superior to the old CRTs in so many ways, and they can be had for cheap too.

The lesson: you spend money purchasing a technology, it becomes a big market, and the technology improves, becoming more efficient, more effective, and much cheaper.

And that’s what we need right now: huge investment in solar. Maybe not Manhattan-Project huge, but something big.

One of my favorites: Solar Roadways, the idea that solar panels could be built into modular road, parking lot, and sidewalk panels. These panels would be covered with a specially engineered glass, tough enough to handle semi trucks rolling over them, and textured enough not to let cars slide under the worst conditions. Such panels would be much easier to install and maintain than asphalt or concrete; they would include heating elements to get rid of snow and ice; they would allow easy access to, and protective cover for, conduits carrying electrical power, data, water, sewage, and whatever else you want to run under them; and they would include LED lighting, allowing for better night road markings and even interactive signage.

Initial costs would be high, but there are three mitigating factors. First, since laying asphalt or concrete also costs money, the real cost of a solar road is only the amount above the cost of a conventional road. If laying asphalt costs $1,000 for a given amount of surface and solar costs $1,600 for the same surface, then you are really only paying $600 more for the added solar feature. Second, the energy the roads produce pays back over time, eventually making the roads cost less, perhaps less than conventional materials; and since maintenance is easier and there are additional benefits, the solar version pays for itself even more quickly. Third, and most importantly, the massive use of solar for such projects would spur research and production, creating solar technologies with higher efficiency and lower costs. The government should not be loaning money to solar companies; it should be buying solar technology, in large quantities. Investors will then have no trouble getting loans or capital elsewhere.
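The incremental-cost argument can be sketched numerically. The paving figures are the post’s own illustrative numbers; the yearly energy value is a made-up assumption added here for the example:

```python
# Incremental cost of a solar road panel over conventional paving,
# and a naive payback estimate. All figures are illustrative.
asphalt_cost = 1000          # conventional paving, per unit of surface
solar_cost = 1600            # solar paving, same unit of surface
energy_value_per_year = 75   # assumed value of electricity produced per year

extra_cost = solar_cost - asphalt_cost           # what solar actually adds
payback_years = extra_cost / energy_value_per_year

print(f"Added cost of solar: ${extra_cost}")
print(f"Years for the energy to repay the added cost: {payback_years:.0f}")
```

The point of the sketch is only that the relevant number is the $600 difference, not the $1,600 sticker price.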

The estimate is that if one-third of all pavement in the U.S. were converted to such solar panels, we would not need any other form of energy generation. That would be worth even more in terms of trade deficits, debts, and energy stability.


There are several problems, the greatest of which is that right now, solar is not immediately cheaper than other kinds of energy; instead, it is an investment. Pay a lot today in order to save even more over time. But over the long run, it is cheaper. The problem there is that most people don’t act on the long run. They see the sticker shock today and think, maybe later.

Another problem is that solar has been ridiculed. Are you some California elitist liberal who thinks solar is the bee’s knees and loves it just as much as your Whole Foods organic products and your New Age aromatherapy treatments? Har! What a dumb hippie you are! Didn’t you hear about Solyndra?

To dip briefly into conspiracy-theory possibilities, one could easily imagine this being opposed by the powers that be because it is essentially home-grown energy. Beyond the panels, no corporation can control the sunlight collected or trade on its future cost. That severely limits the profit-taking. Screw the fact that it is incredibly more friendly to the environment, infinitely more safe than oil, coal, or nuclear, and that it will cost billions less in terms of clean-ups in the future. If it can’t be controlled to yield hundreds of billions in profits, then what good is it?

This bias is evident in other ways. Often laws made in the shadow of energy lobbying even discourage solar:

The experience of Orrin Kohon, a Los Angeles resident with a second home in Hawaii, reflects the hurdles facing consumers hoping to join the rooftop movement. If all goes well, Kohon will soon receive local government approval to let workers mount an $18,000 leased solar power system on the roof of his Honolulu house. Monthly electric bills for his modest 1,750-square-foot abode run about $400—at 32.6¢ per kilowatt hour, the highest in the nation. With his rooftop system, installed by a third-party contractor, he’ll generate enough of his own power to lower that rate to 7.3¢ per kilowatt hour for the next 20 years. That’s a savings, he says, of $120,000 over that period. “It’s a hedge, like locking in $2-a-gallon gasoline,” says the 63-year-old owner of a Los Angeles career counseling service. “The thing is, I have to act now. If too many of my neighbors beat me to the punch, I won’t be able to connect.”

That’s because thousands of Hawaii residents have also realized that even the most elaborate systems, costing up to $55,000, can pay for themselves in as little as four years given current power rates and state and federal incentives that chop up to two-thirds off the installation price. This rooftop stampede is overwhelming the permit process—70 percent of all current permit applications in the state are for solar installations—and causing utilities to impose moratoriums in some areas on how much solar they are willing to accept to their power grids.

The rule of thumb had been that once rooftop installations made up 15 percent of the power on a given circuit, utilities could stay new connections until residents undertook an engineering study—costing as much as $50,000—that showed their addition wouldn’t destabilize the power grid. While that rule has been eased to 25 percent in Hawaii, the extra burden on consumers explains why “there are places on Maui where the saturation is such that we don’t even solicit for business there,” says Alex Tiller, chief executive officer of Sunetric, a Hawaii-based rooftop solar power installer.

The hidden costs of obtaining permits and regulators’ approval to install rooftop panels is a big reason the U.S. lags behind Germany, which leads the world in rooftop installations, with more than 1 million. The price of installed rooftop solar in Germany has fallen to $2.24 per watt. In fact, on a sunny day in May, rooftop provided all of Germany’s power needs for two hours. “This is a country on latitude with Maine,” says Dennis Wilson, president of the Mid-Atlantic Solar Energy Industries Association, a solar-installer trade group. “Germany is showing us what’s possible—if we can just get our act together.”
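The payback arithmetic in the quoted passage roughly checks out. The system price, the “two-thirds off” incentive, the electricity rates, and the monthly bill all come from the article; treating the bill as proportional to the per-kWh rate is my own simplifying assumption:

```python
# Back-of-the-envelope check of the quoted rooftop-solar payback claim.
system_cost = 55_000           # "even the most elaborate systems"
incentive_fraction = 2 / 3     # "chop up to two-thirds off the installation price"
old_rate, new_rate = 32.6, 7.3 # cents per kWh, before and after solar
monthly_bill = 400             # dollars per month at the old rate

net_cost = system_cost * (1 - incentive_fraction)
monthly_savings = monthly_bill * (1 - new_rate / old_rate)
payback_years = net_cost / (monthly_savings * 12)

print(f"Net cost after incentives: ${net_cost:,.0f}")
print(f"Monthly savings: ${monthly_savings:.0f}")
print(f"Payback: about {payback_years:.1f} years")
```

That lands at roughly five years, consistent with the article’s “as little as four years” for less elaborate systems.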

Oil, gas, coal, and nuclear utilities have to be told in no uncertain terms to step aside. They have had their run; if it is too hard for them to operate with competition, then they are no use to us. Let them close down and the government will pick up operations until enough solar is installed to shrink them into networking and backup operations.

It is time for solar. We have to commit, and commit big.

Categories: Technology Tags:

No, Outlawing It Isn’t Worse

October 19th, 2013 Comments off

At TPM, Cathy Reisenwitz made an argument that laws against revenge porn are worse than the problem itself. She begins with a disturbing hint that the law may be like marijuana laws that put people in prison:

The state of California can now add people who post naked photos of their former partners to its criminally overcrowded prisons if they do so without permission and with the intent to cause emotional distress or humiliation.

It seems to me that this comment is wholly unnecessary. If the prisons are overcrowded, we should not put people there who have committed awful crimes? And yes, revenge porn is that sort of crime. Not nearly as bad as rape, but definitely in the same category. I am perfectly OK sending such people to jail.

She then gapes in puzzlement that the law would do more:

Proposed legislation in New York would actually widen the ban to include photos victims take of themselves.

Yes, that’s right. Just because someone took a selfie does not make it any better when the jilted boyfriend publishes it on the Internet. Why should it?

But that’s not Reisenwitz’s main objection.

While well-intentioned, this kind of legislation is over-broad, poses serious free-speech threats and may not even be necessary going forward.

The first thing it’s important to keep in mind is that revenge porn laws criminalize speech.

Huhwhat?

As the ACLU has discussed, such laws can be used to censor photos with political importance. As Jess Rem pointed out for Reason magazine, people such as Jeff Hermes, Director of the Digital Media Law Project at Harvard, share this concern about the law. Hermes has stated that revenge porn laws could have kept former New York Rep. Anthony Weiner’s (D) nude selfies legally suppressed.

Uh yeah, no. Aside from the less significant but still relevant points that (1) it is arguable that politicians’ personal sexual peccadilloes are really newsworthy, and (2) relevant parts of the photos can be blacked out or pixellated in the case that context is somehow deemed necessary, there are two reasons why this is not an issue.

First, it is not necessary to show the image in order to report on the story. Even if a story about a politician sexting someone is not gratuitous in and of itself, the photos certainly are. I would not deem it a great threat to free speech if the media were limited to only telling us about Weiner’s selfies rather than showing us the images.

And second, no journalist would ever be prosecuted for revenge porn that did not specifically involve them. To make the person who released them liable is not something that affects freedom of the press, any more than outlawing the release of classified documents did in cases like Edward Snowden’s.

Not to mention that these laws often include language that specifies the offender must have “intent to cause emotional distress or humiliation.” If someone releases photos of a politician in a state of undress, it could be for the purpose of revenge—but the claim could very easily be made that it was for the purpose of informing the public, thus making even the person releasing the images safe from prosecution.

In fact, the laws may not be strong enough. People wishing to release these images will probably find loopholes, like having a third party post the images on the Internet. If sharing the photos with a private third party is not illegal, and if the third party has no cause for humiliating the victim, then probably no case could be made.

Reisenwitz suggests, however, that there is enough legal protection without the new laws:

Civil lawsuits have always been available to victims. Late last year a Texas judge ordered an ‘indefinite’ lock on revenge porn site PinkMeth.com as Shelby Conklin sought “punitive damages of more than $1 million for intrusion on seclusion, public disclosure of private facts, appropriation of her name and likeness and intentional infliction of emotional distress.”

The case was eventually settled, and the offenders paid restitution instead of serving time in jail. This is just one example of the many successful lawsuits by victims of revenge porn.

Before the law, there were already at least seven different kinds of laws revenge porn could have violated, depending on the circumstances. They include but are not limited to laws dealing with extortion and blackmail, child pornography, invasion of privacy, copyright infringement, voyeurism, intent and violation of the Consumer Protection Act.

The first example Reisenwitz cites is clearly inapplicable. It was against the revenge porn site, not the person releasing the images. You don’t even need revenge porn sites to release such photos, and sites not intended for such photos could claim they had no idea of the photos’ origins. Also, many of the charges dealt with commercial distribution, something that would not apply to an individual posting to a porn forum. Not to mention that many such sites will not be within the courts’ jurisdictions, or that any victim making such a case will become a huge target for similar sites and their supporters.

The other laws? Extortion and blackmail laws would only apply if the jilted party made such threats, which is probably very rare. Child pornography would apply only in limited cases. I’m not a lawyer, but I would think citing invasion of privacy is pretty weak—people are not penalized for spreading personal information about exes, and the fact that the photos were consensual probably negates this as a possible legal avenue. Copyright infringement could apply to selfies, but not to images taken by the perpetrator—and it would be pretty difficult to assess financial damage if you had no intent to sell the images yourself, and the perpetrator did not profit themselves. Voyeurism is laughable in this context. As for the Consumer Protection Act, it relates to commercial profit, again not applicable to the individuals.

It’s pretty clear that these laws are insufficient. Mitchell Matorin has a much more detailed rundown.

No, the law is not worse than the crime. Not in the least. And frankly, laws against privacy infringement are far, far too weak in this country. As with all forms of intellectual property and information in general, we are in a new age, and the laws are too far behind. These new laws are not inappropriate, and in fact, we need a lot more regulating how information is collected, disseminated, and bartered.

Balking at making revenge porn illegal is, if anything, a frightening step in the wrong direction.

Categories: Social Issues, Technology Tags:

Oblivious to Your Surroundings

October 10th, 2013 Comments off

A terrible story out of San Francisco:

The man drew the gun several times on the crowded San Francisco commuter train, with surveillance video showing him pointing it across the aisle without anyone noticing and then putting it back against his side, according to authorities.

The other passengers were so absorbed in their phones and tablets they didn’t notice the gunman until he randomly shot and killed a university student, authorities said. …

“These weren’t concealed movements — the gun is very clear,” District Attorney George Gascon said. “These people are in very close proximity with him, and nobody sees this. They’re just so engrossed, texting and reading and whatnot. They’re completely oblivious of their surroundings.”

In a case such as this, of course it seems horrifying that a gunman who would eventually shoot someone would go unnoticed (although how the situation would have been improved if anyone had noticed is not exactly made clear). However, there is also a clear judgment being made here: that it is a bad thing, perhaps irresponsible, negligent, or asocial, to be engrossed in a technological device in a public place.

This is one of those times when I have to roll my eyes and sigh out loud.

The context here is riding a train for long stretches, something I do on a daily basis. Without some kind of diversion, you are just sitting there looking at nothing. 99.9% of the time, nothing is happening. People are just sitting and standing. This is not a social situation. No one is interacting—nor does anyone want to; you're mostly surrounded by strangers, and people get annoyed when there is too much active talking. As a result, you mostly get people standing there in silence, as they would in an elevator. Except that in an elevator, you're just there for a minute, not an hour.

What are you supposed to do, remain quietly observant and vigilant in case someone brandishes a gun?

The clear implication is that the passengers were not only "oblivious" to their surroundings, but pathetically or irresponsibly detached because they were engrossed in electronic devices.

This has been a popular complaint for some time now. It stems from the idea that if people engage in some new portable personal entertainment, particularly of an electronic nature, while in public, it is assumed to be impolite, as if you are shutting yourself off from others.

I have never respected that complaint. It’s as if the one complaining expects everyone else in public to pay attention to them. Why?

Sure, if someone keeps bumping into other people or causes some kind of damage, that would be different. But that’s regarding something that occupies your visual attention while moving or operating a vehicle. That’s a legitimate concern, and I fully agree with laws about texting while driving. Or if there is a social event where a person attending is supposed to be paying attention to others—a party, a meeting, even just a conversation—then yes, it’s asocial to instead be absorbed in something else.

However, that’s not what we’re talking about here. Train passengers are not driving the vehicle. Someone sitting alone in public, in a park, at a coffee shop, at a bus stop—these people are not expected to be engaged with anyone else.

So, how is it asocial or in any way wrong for these people to occupy themselves?

But that's not the only point here—it's not just occupying yourself, it's doing it with some new device. Remember the complaints when Walkmans first came out? Same thing now. It's the same scorn for new technology that still generates fear of crime on the Internet, even when the same crimes could just as easily happen in any other context.

Consider how different the reaction would be if people on the train with the gunman were reading books or engaged in conversation with someone they were traveling with. Would there be the same level of disdain, the same feeling of contempt? Almost certainly not. There would be more of a sense of shared horror, a feeling of sympathy rather than condescension. Like nobody in a movie theater noticing someone in the back row brandishing an Uzi, or no one in a library noticing someone walking by carrying a handgun. Those diversions would be considered more proper.

But entertain yourself in public? With an electronic device? Of course not. You should be happy to ride a train for long periods of time with nothing to do. Jerk.

Categories: Social Issues, Technology Tags:

Not All the Bugs Worked Out Yet

October 6th, 2013 Comments off

Computers have the ability to analyze scanned printed words and convert them to selectable text. This is called Optical Character Recognition, or OCR for short.

However, even with relatively clean printed text to work with, OCR still often fails to render the text completely accurately: an r followed by an n can be misread as an m, or a d can be split into a c followed by an l, turning the word "down" into the word "clown."
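To put that in concrete terms, here is a toy sketch—not a real OCR engine, and the confusion pairs are purely illustrative—of how a glyph-sequence misread mangles a word:

```python
# Toy model of OCR glyph confusion—NOT a real recognizer.
# The substitutions below are illustrative examples only.

def misread(word: str, printed: str, seen_as: str) -> str:
    """Simulate one OCR confusion by substituting a glyph sequence."""
    return word.replace(printed, seen_as)

# A "d" misread as "c" + "l": "down" becomes "clown"
print(misread("down", "d", "cl"))      # clown

# An "r" + "n" merged into "m": "modern" becomes "modem"
print(misread("modern", "rn", "m"))    # modem
```

Real OCR errors are statistical rather than rule-based, of course, but the visual logic is the same: adjacent or similar glyph shapes collapse into each other.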

This one has to be my all-time favorite, however:

[Image: OCR misreading example]

Categories: Technology, The Lighter Side Tags:

Marketing Research Tells Us That You Want to Pay More for Less

March 2nd, 2013 2 comments

Many companies use focus groups, and give you what you say you want.

Apple forgoes focus groups, and gives you what you want but didn’t know you wanted.

Time-Warner Cable makes up their own facts, and tells you what you don’t want, even if you are sure you want it.

And what it says you don’t want is good service.

It should come as no surprise that TWC ignores popular demand and instead insists that no one wants Gigabit Internet:

Speaking at the Morgan Stanley Technology Conference, Time Warner Cable's Chief Financial Officer Irene Esteves seemed dismissive of the impact Google Fiber is having on consumers. "We're in the business of delivering what consumers want, and to stay a little ahead of what we think they will want," she said when asked about the breakneck internet speeds delivered by Google's young Kansas City network. "We just don't see the need of delivering that to consumers." Esteves seems to think business customers are more likely to need that level of throughput, and notes that Time Warner Cable is already competitive.

In case you didn’t notice the stench, that’s all a pile of something well-digested and fetid.

Wired nails it on the head:

Experts believe that this reluctance has less to do with a lack of customer demand and more to do with protecting high margin broadband businesses. Companies like Time Warner Cable make around a 97 percent profit on existing services, Bernstein Research analyst Craig Moffet told the MIT Technology Review this month. But Verizon is more interested in wireless broadband, on which it can make an “absolute killing,” by charging per gigabyte for usage, broadband industry watcher and DSL Reports editor Karl Bode told Wired earlier this year.

In other words, rather than spending money on high-speed networks due to customer demand, telecoms are instead cashing in on the least they can offer while charging ahead on technologies which offer the highest profit margins—and since they move in lockstep with little or no real competition, customers get no say in the matter.

Telecoms are like any other corporation: get as much money by any means necessary. In the early 1990s, a variety of telecoms successfully lobbied states to drop regulations limiting their profits on services that were, effectively, monopolies. In exchange, the telecoms promised delivery of near-universal 45 Mbps fiber-optic broadband throughout those states by deadlines ranging from the '90s to 2015. Most states agreed, leading to hundreds of billions in extra profits for the telecoms—who soon afterward killed most of their fiber-optic programs. The extra profits the telecoms have made over the past 20 years from the rate hikes they were allowed could have paid for nationwide broadband.

Do we have that? Not even close. And now these same companies are saying it won’t come because we don’t want it. Specifically, “We just don’t see the need of delivering that to consumers.”

Of course not. If they can charge $100 a month for 15 or even 5 Mbps in so many locations, who would want to spend all that gravy on new networks instead of simply running off with the cash?

These same profit-rich corporations are instead whining about how they don’t get to invalidate Net Neutrality and charge even more, once again claiming that they need to charge more so they’ll be able to invest in broadband—the exact same shell game they played on us before.

Meanwhile, they reveal that they have no intention of truly improving their networks to the extent people want and need.

No, consumers don’t want Steam to be able to deliver HD games quickly. They don’t want high quality video conferencing. But most importantly, consumers do not want, under any circumstances, Apple or Hulu or Netflix or Amazon to deliver 1080p video over Internet.

Because that would threaten the cable contracts many of the ISPs enjoy with millions of American households, leading to a la carte video services that could be cheaper and more convenient than the crap delivered now. No, consumers don’t want that.

Comcast in California does offer 105 Mbps speeds in major cities… for $110 a month, if you commit to long-term service ($200 a month if not). If you don’t live in a major city in most states, you could pay that much for 30 Mbps.

Here in Tokyo, KDDI offers Gigabit Internet for $60 a month. I know it's not as expensive to wire up Japan as it is America, but you would think that at least in major Californian cities you wouldn't pay double the price for one-tenth the speed—essentially 20 times the cost per megabit for equivalent service—after 20 years of the telecoms overcharging enough to pay for it all and then some.
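If that "20 times" figure seems hand-wavy, here is the back-of-the-envelope arithmetic, using only the prices quoted above:

```python
# Back-of-the-envelope cost-per-Mbps comparison, using the
# prices quoted above (USD per month, advertised Mbps).
comcast_price, comcast_mbps = 110, 105   # major-city tier, long-term contract
kddi_price, kddi_mbps = 60, 1000         # Tokyo gigabit service

comcast_per_mbps = comcast_price / comcast_mbps   # ~$1.05 per Mbps
kddi_per_mbps = kddi_price / kddi_mbps            # $0.06 per Mbps

ratio = comcast_per_mbps / kddi_per_mbps
print(f"Comcast costs about {ratio:.1f}x more per Mbps")  # about 17.5x
```

Call it "essentially 20 times," rounding generously.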

Categories: Corporate World, Technology Tags:

Another Telecom Attack on Network Neutrality to Grab Profits and Suppress Free Speech—International Edition

December 6th, 2012 Comments off

There’s a conference in Dubai which is only now breaking out in the news. It’s a conference to discuss a 25-year-old international treaty on how the Internet works worldwide.

These people talk about how they only want to increase access to people in the third world, and make the Internet better for everyone:

"The brutal truth is that the internet remains largely [the] rich world's privilege," said Dr Hamadoun Toure, secretary-general of the UN's International Telecommunications Union, ahead of the meeting.

“ITU wants to change that.”

The people running the show claim they’re not doing any harm:

Gary Fowlie, head of ITU liaison Office to the United Nations, insisted in a phone interview that his organization’s effort to revise outdated telecom rules is not an attempt to change the way the Internet is governed.

“This whole idea there would be some kind of restriction on freedom of expression, it just doesn’t fly with what the ITU has stood for,” he said, stressing that as a U.N. entity, the ITU is bound to uphold Article 19 of the Universal Declaration of Human Rights, which guarantees the right to free expression through any media.

Sounds great, until you realize that “ITU” stands for “International Telecommunication Union.” That right away should be a giveaway. The next hint:

But the [ITU] said action was needed to ensure investment in infrastructure to help more people access the net.

Red flag time! Hear those alarm bells and sirens going off? Any of this sound familiar? “We want to help people get access to the Internet. That requires infrastructure.” This is the inevitable preface to the next statement: “We need money to do that. Let’s talk about how we can make more money.” And thus we arrive at the actual motive behind the lobbying, and upon closer inspection, find the justifications to be specious.

Yep. It’s just like when the U.S. Telecoms tried to gang up and buy their way to Internet ownership in the U.S. Virtually nothing is different: the telecoms are whining about how they’re losing so much money because of the big, bad Internet:

Some telecommunications companies are looking at WCIT as an opportunity to address the business reality that new technologies are severely eroding traditional revenues from old-style voice calls. Customers are no longer making phone calls as they once did, and are instead using an application layer on the Internet to carry voice and video. Landline services are increasingly being replaced with mobile communications services that are themselves increasingly being used to provide data connectivity. Beyond voice, the companies argue that large content providers are making revenue from customers’ access to those services over their Internet connections.

So these companies see this treaty as a way to “re-balance” revenue streams between carriers and “over-the-top” providers. Claiming that regulatory help is needed to ensure the ongoing investment in the Internet’s infrastructure, they have dusted off an old concept known in telecom circles as “sending network pays.” On its face, the idea is simple: The network or ISP of the sending party should pay for the delivery of their traffic (just as with cross-border telephone calls).

That’s the same bullshit argument made by the U.S. telecoms, the billionaire’s cry of poverty. “We’re losing revenue from people using Skype instead of making international phone calls, so we need to make up the money somewhere else.”

What a complete load of crap. As if these people are not making huge profits on all-new revenue streams in several different areas, many of which derive specifically from the Internet usage they now claim cannibalizes their revenues.

Let's see. I pay for my Internet connection—a monthly fee that easily exceeds, by quite a bit, what I used to pay for my traditional land line. I also pay for cell phone use—in fact, I pay for no fewer than three different Internet connections: the aforementioned home connection, plus the data plans for my wife's and my own cell phones. Each costs about the same.

Repeat this throughout the entire world, and you begin to understand that the telecoms have never had it better. If they want to cry poverty, I demand they first cough up their balance sheets for close inspection. Because I will bet you quite a bit that their profit line is probably not very far behind Big Oil and Big Pharma.

Once again, they try to make it all sound more palatable by saying they are going after big corporations:

One of the other concerns raised is that the conference could result in popular websites having to pay a fee to send data along telecom operators’ networks.

The European Telecommunications Network Operators’ Association (Etno) – which represents companies such as Orange, Telefonica and Deutsche Telekom – has been lobbying governments to introduce what it calls a “quality based” model.

This would see firms face charges if they wanted to ensure streamed video and other quality-critical content download without the risk of problems such as jerky images.

Etno says a new business model is needed to provide service providers with the “incentive to invest in network infrastructure”.

Again, the same bullshit argument they made in the U.S. six years ago. And it's still full of crap. They already have all the incentive they need to expand infrastructure. They already have huge profits. The content providers who send bandwidth-intensive content already pay for sending that data, as do the users who consume it. And we have seen before what happens when telecoms promise to expand infrastructure in exchange for the ability to charge more: they never deliver.

What they really want here is virtual ownership of the Internet. They want to be able to wring every last penny, yen, pound, and deutschmark that they possibly can by charging for something, then charging for it again, and then charging someone else for the same thing as many times as they can manage.

Piggybacking this wave are governments scared shitless over the freedom of expression the Internet represents and the threat this is to their control over their populaces, a fact that has not gone unnoticed by people who know the Internet better than anyone—like Vint Cerf, co-creator of the TCP/IP protocol and regarded as one of the “fathers of the Internet,” who wrote this message of warning:

Today, this free and open net is under threat. Some 42 countries filter and censor content out of the 72 studied by the Open Net Initiative. This doesn’t even count serial offenders such as North Korea and Cuba. Over the past two years, Freedom House says governments have enacted 19 new laws threatening online free expression.

Some of these governments are trying to use a closed-door meeting of The International Telecommunication Union that opens on December 3 in Dubai to further their repressive agendas. Accustomed to media control, these governments fear losing it to the open internet. They worry about the spread of unwanted ideas. They are angry that people might use the internet to criticize their governments.

The ITU is bringing together regulators from around the world to renegotiate a decades-old treaty that was focused on basic telecommunications, not the internet. Some proposals leaked to the WCITLeaks website from participating states could permit governments to justify censorship of legitimate speech — or even justify cutting off internet access by reference to amendments to the International Telecommunications Regulations (ITRs).

Cerf then urges us to remain vigilant against those in power corrupting one of the most invaluable advances in communications and freedom of expression in human history:

A state-controlled system of regulation is not only unnecessary, it would almost invariably raise costs and prices and interfere with the rapid and organic growth of the internet we have seen since its commercial emergence in the 1990s.

The net’s future is far from assured and history offers much warning. Within a few decades of Gutenberg’s creation, princes and priests moved to restrict the right to print books.

History is rife with examples of governments taking actions to “protect” their citizens from harm by controlling access to information and inhibiting freedom of expression and other freedoms outlined in The Universal Declaration of Human Rights.

We must make sure, collectively, that the internet avoids a similar fate.

Indeed.

Let me reiterate something I feel is very important: the Internet is the single most important advancement in communications technology in the history of the human race. More important than the printing press, more important than radio and television.

Why? Because the Internet is the first human technology which allows worldwide dissemination of speech and ideas which is not controlled by the wealthy.

Before the Internet, if you wanted to speak beyond the reach of your own voice, if you wanted to deliver an idea beyond just the few people you have contact with, if you wanted to speak to more people than you could gather in a local public place—you had to beg at the feet of the Gatekeepers.

The Gatekeepers are the ones who used to control communication. They are the publishers and the regulators. They are the wealthy and empowered who controlled all means of publishing content. Want to write a book? Not unless we say so, and thanks, we’ll keep almost all the profits for ourselves. Want to speak over a network? Not unless you can make us big profits, or lend a popular or sympathetic face and voice to opinions we wish to propagate.

Much of this was justified by the expense of said networks. Publishing books and building broadcasting networks isn’t cheap, and the available resources were few. So there was not much complaint about the lack of freedom to communicate.

However, the Internet changed all that. This blog can be accessed worldwide—and I don’t have to pay much to publish it. I can write almost anything I want, within reasonable law, and within seconds, people in Luxembourg, Hong Kong, Malaysia, and Israel can read it. I pay about $10 a year for the domain name, and maybe $100 a year for hosting; I could get cheaper pricing than that, or I could pay nothing and instead have a blog hosted by WordPress or some other blogging service.

This ability to speak to the world has never before existed.

That is a greatly unappreciated fact of the Internet: how it has opened the doors to potentially anyone in the world communicating with a large portion of humanity, openly, freely, instantly, and (usually) cheaply.

A freedom and availability that would be threatened if the ITU got what they wanted.

So pay no mind to the weeping billionaires and multinationals, or the angry dictators fearful of losing control. Disregard the claims of the super-rich telecoms crying poverty and claiming they only want to give fiber-optic to children in Africa. Ignore the claims of sock puppets for dictators that no one is trying to squelch freedom of expression. Recognize these for the obfuscation, distortions, and lies that they are.

And whenever possible, write to your legislators: give the Internet to the telecoms, and we will vote you out. Threaten free speech over the Internet, and we will overthrow you. We have only just received this freedom, and we refuse to surrender it.

On Handwriting

October 14th, 2012 7 comments

Sullivan quotes Philip Maugham:

[T]here is good reason, argues Philip Hensher, for such a paradoxical evaluation of our handwritten style: “We have surrendered our handwriting for something more mechanical, less distinctively human, less telling about ourselves and less present in our moments of the highest happiness and the deepest emotion,” he writes, while simultaneously recognising that “if someone we knew died, I think most of us would still write our letters of condolences on paper, with a pen.” Hensher’s new book The Missing Ink: The Lost Art of Handwriting (And Why it Still Matters), rests on the argument that “ink runs in our veins, and tells the world what we are like”. Handwriting “registers our individuality, and the mark which our culture has made on us. It has been seen as the unknowing key to our souls and our innermost nature. It has been regarded as a sign of our health as a society, of our intelligence, and as an object of simplicity, grace, fantasy and beauty in its own right.”

I beg to differ. Handwriting is given status primarily because it was used for such a long time, and that was out of necessity, not preference. In that sense, it is like people preferring paper books over electronic media. As the modern alternatives push aside the old, those who prefer what they grew up with have a tendency to create ornate rationales as to why their outdated ways are superior, and bemoan their passing. I recall one person making similar claims about ebooks, saying that they lacked the “permanence” of books, in that electronic media is alterable. As if people often go about altering ebooks they read, or that it is impossible to alter printed material by reprinting it.

The fact is, handwriting is much more a chore than it is an art for most people. It can take years to learn and perfect, and many people never master it. And what art it may possess only exists because human beings have so imbued it. However, this art can be instilled far more effectively by the choice of phrasing than by the fact that it comes a little more directly from your hand.

Handwriting discriminates. It can be brilliant, artistic, illuminating—but only if you are skilled at it. Those who cannot master it are severely set back if handwriting is the only means of written communication. Their words, valuable in themselves, are muddied by a discriminatory medium. Yes, mastery of language can also be discriminatory—but language, at least, is necessary to communication. Handwriting is not.

Handwriting can also be a barrier to communication; no doubt you have encountered undecipherable scribbles, and have heard the almost clichéd stories of doctors’ unreadable scrawls. Instead of demanding that doctors learn penmanship, I would rather they spend more time learning how to be doctors, and use a keyboard to enter prescriptions.

Handwriting is the fashion industry of written communication: it is a superfluous and superficial art which can be expressive, but takes itself way too seriously. Just as the person who inhabits the outfit is far more important than the clothes themselves, the words and their meaning are what truly matter, the handwriting in which they are expressed being nothing but a decoration in comparison. And beneath the words, our feelings, choices and intent. Hensher’s conceit about expressing emotion is ill-considered, with the use of language itself towering over a lilt or a flourish of the pen. The worst handwriting in the world could be possessed by the most compassionate heart, articulating the most poignant or noble message. Handwriting can add a flair, but it can also rob us of expressiveness.

What it comes down to is this: the words, and the meanings they convey, constitute the soul of writing. Handwriting, in contrast, is almost frivolous. It is, in a sense, skin deep. It can add beauty, but barely any meaning. The great deal of time spent learning it could be better devoted to other endeavors—such as learning how to use words to express yourself, something that schools, ironically, have sometimes spent less time teaching kids than penmanship.

And what meaning it does add can be matched by fonts—perhaps even outmatched. Presidents and marketers alike choose fonts with great care to express their messages. Obama chose “Gotham,” a font reminiscent of city buildings, to express a sense of civil service, of community, of utility. Gotham is also sans-serif, a font category that implies a message of importance. His opponent in 2008, John McCain, chose Optima, a font associated with military service via its use in the Vietnam Memorial; this font was coupled with the use of a beveled nautical star, also with military connotations. Ironically—or perhaps not so—Optima is a “centrist” font, a sans-serif typeface which has hints of serifs. Romney, in the meantime, seems to have chosen a muddle of fonts which do not appear to have meaning directly relevant to his campaign—something telling to a designer. (Mitt’s team also seem to have forgotten that many fonts are not public domain.)

Each font has its meanings and associations. For example, I have done a good deal of hiring, and have found, in hindsight, that people who interviewed and later performed very well often had used Garamond as the primary typeface for their resumes. A humble yet elegant font, Garamond is one most people have but almost never use, unaware of how beautiful it can make a document look when used correctly.

Here’s the thing, though: with fonts, one can express a broad variety of associated meanings. With handwriting, you are more or less stuck with one style. While fonts can be easily learned and applied, handwriting takes great effort and practice, and yet is more limited in its ability to express specific messages.

Fonts are a great equalizer. They allow anyone to express through written language what only some can achieve by hand. Hensher’s implication that typography is “less human” is nothing but self-important hogwash. It’s like suggesting that one’s appearance is mechanistic and inhuman simply because you did not spend years learning how to make your own clothes. It’s like suggesting that it’s more human to make your own home or else you’re a soulless ant in a hive-like artifact, disregarding the fact that what happens within the home and what it represents to the people residing there has far more significance than the personalized shape of the moldings.

And signatures? Hah. I’ll be glad when they disappear. We’ll be far better off with biometric identification. In my job, I sometimes have to sign stacks of documents. I’m lucky if I can get a few signatures that really look the same. As a security measure, it sucks. I was at my bank a week or so ago, and had to sign a form. What followed was so absurd as to be almost comical: they told me the signature didn’t match well enough, so could I please add a little more to my first name? Oh, we need a dot above that “i.” And there should be a little hook at the bottom of that “P.”

Seriously. They spent about 5 minutes telling me how to forge my own signature.

No, handwriting is not some thing of unmatched beauty which is being crushed by robotic printouts robbing us of our humanity. Hensher says it is less mechanical, as if there is some inherent magic with piercingly specific meaning in slight variations in writing the letter “k.” He says typing is “less human”; well, so is driving a car over walking, and yet I bet Hensher thinks nothing of driving to the supermarket—also less “human” than a local family-run grocer. Or maybe I’m wrong, and the guy is Amish.

I will actually agree with him on one point: the use of handwriting in letters of condolence. But even that is more traditional than inherent, the current favor for such personal attention in crafting the letter being appreciated less for its intrinsic value than for its conventional meaning. When you think about it, such letters are actually less human than someone coming to you and delivering such a message in person. For those who desire permanence, electronic messages can be regarded as just as human, just as touching—for, as I stated earlier, the soul of a message is in the words chosen and their expression of human feelings and intent.

The only advantage I can see in handwriting comes not from the art itself but from a by-product: the feeling of physical connection to the writer. This paper with this message which I am holding right now was written by that person; it is a physical link which we, as humans, tend to appreciate.

But even that can be met almost fully with print—by printing the damned thing out. Sign it if you must, but still the paper comes from that person no less, was held in their hands and traveled to yours. Not as attractive for the traditionalists, perhaps—but this advantage, as far as I am concerned, pales before the advantages of “mechanical” text.

Tell me—were this blog post written by hand and scanned for display, would it be more meaningful? Or would you be just as likely to think, “Jeez, I’d rather not spend the extra effort to read his handwriting. Why didn’t he just type it?”

And if you think that print is less enthralling, then explain to me why literature is not considered some lifeless, inhuman art as a result of the fact that it is printed?

Handwriting, whether art or chore, is departing, and as far as I am concerned, mostly for the better.

Categories: Social Issues, Technology Tags:

Samsung Galaxy S III and the iPhone 5

October 7th, 2012 2 comments

The Samsung Galaxy S III is currently at the #2 spot in smartphone sales in Japan, with the new iPhone 5 in the #1 spot. Admittedly, the iPhone 5 just came out, while the S III has been out since May.

On the other hand, the iPhone is counted as 6 different phones—once for each combination of carrier (SoftBank and au) and capacity (16, 32, and 64 GB). That's why, in addition to holding the #1 spot, the iPhone 5 also holds the #3, #5, #7, #8, and #10 spots—6 of the top 10 spots on the best-selling list. The reason for this unbalanced reporting is to prevent the iPhone from always being #1; ironically, it still reaches #1, and with each new model release, dominates the whole top ten.

The Galaxy S III, despite having two capacities, is listed as a single phone, thus strengthening its relative position in the ratings compared to the iPhone. That is likely the reason why the Galaxy S III is shown as beating out the old iPhone 4S, which still occupies the #4 and #9 spots, in addition to the #16, #22, #48, and #59 spots. Were the iPhone 4S to be counted as one phone as the Galaxy S III is, it would almost certainly take over the #2 spot from Samsung's model.

The Galaxy S II is similarly sold by multiple carriers but not divided in the list, giving it the same advantage against the iPhone 4S, which is likewise listed as six different models. The S II, however, despite being a phone of the same generation as the iPhone 4S, is languishing at #41 on the list.

This gives me the opportunity to also mention the little war that’s been going on between the two manufacturers, a kind of mini Mac-PC war, with users battling it out.

Overall, the fighting is silly. Choose the phone you like, and enjoy it. That’s what I tell my students when we talk about operating systems; they ask which is better, and after listing the advantages and disadvantages of each system, I conclude by asking them simply, “Which do you like better? Which one feels more comfortable to you? Are you satisfied or dissatisfied with the one you are using?” And then I point out that a lot of the determination is subjective, and is simply a matter of preference. The same holds for the cell phones.

What annoys me, however, is when people repeat Samsung’s pithy assertion that “Apple patented the rectangle.” A lot of trolls use it in discussions, and you know you have to ignore these pinheads. Nevertheless, it’s out there and should be addressed. Obviously, phones were already rectangles before the iPhone came out; to suggest that Apple’s innovations were so general and unworthy of note is laughable. Remember what “smartphones” were like before the iPhone? Probably you don’t; it’s easy to forget how hopelessly bad they were. Apple went over virtually every tiny little aspect of their design and function and remade them, most of these changes being significant—or at least significant enough for most cell phone makers to copy or imitate them.

Ironically, it was one of Samsung’s own documents that showed this up—a 126-point slide presentation showing how the iPhone’s design was better than Samsung’s S1, and how Samsung should copy Apple’s design decisions on each of these points. Here’s a representative slide:

Point126

Ironically, two of the points express how Samsung should copy the iPhone’s design, while a third notes that an effort should be made to avoid looking like they were copied. In short, copy the elements which make the iPhone stand out, then change the appearance enough so that it doesn’t look too blatant. Copy but don’t look like you’re copying. Little wonder Samsung lost in the U.S. case, and yet telling that it didn’t lose in Korea, not to mention elsewhere.

Samsungad

As a result, when one sees someone holding a cell phone nowadays, one often has to look carefully to determine whether it's an iPhone or something else. Admittedly, the Galaxy S III is visually different to a greater degree, although I was chagrined and amused to discover that in my initial viewing of the ratings list I had mistaken the S III for another iPhone. Seriously.

Samsung also went on the offensive with an ad showing how much better the Galaxy S III is than the iPhone 5, at right (click for the full-size version). One may note that they used differently colored phones, and keep the iPhone off while the S III is on. I confused the two in the ratings list because both were black and shown activated. I don’t think it was a random choice to show them that way in this ad. It would have looked a lot worse for Samsung had they been side-by-side, both the same color, and both turned on.

The ad made these comparisons:

Samsungadtext

Samsung actually has some points here, but to a knowledgeable observer, it's clear that they're not going for actual advantages, but instead are aiming to pad the list.

The screen is one point of difference, but is listed three times. The S III has a 4.8-inch AMOLED screen at 1280 x 720, whereas the iPhone 5 has a 4-inch Retina screen at 1136 x 640. The final point—the resolution—is the only significant difference in most cases. People like big screens, but they also like small profiles. AMOLED gets you better contrasts and deeper blacks with lower power consumption, but Apple's display has been rated as the best quality across a broader range of measures. And in the end, few will notice the difference in resolution. Advantage goes to the S III in most cases, but not by much.
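If you want to check the density arithmetic yourself, it's easy enough to work out from the published specs. A quick sketch in Python (the `ppi` helper is my own, not any official figure):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch: diagonal resolution divided by diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_in

s3 = ppi(1280, 720, 4.8)       # Galaxy S III: ~306 PPI
iphone5 = ppi(1136, 640, 4.0)  # iPhone 5: ~326 PPI
print(round(s3), round(iphone5))
```

The smaller screen is actually the denser of the two, which is part of why few people will notice the resolution difference.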

Another three points are about the battery. The S III has more standby time. However, how many people let their phones remain idle for more than ten days? How many don't recharge every day or two? Samsung brags about battery life in use; in some tests, the S III's battery lasted longer, though nowhere near as much as advertised. These running times vary, and the advertised times are based on settings at minimum, which do not reflect real-world use. When the screens are set to maximum brightness and LTE is used, in fact, the iPhone 5 battery actually lasts longer than the S III. In normal use, the battery is more or less a wash. The only significant difference comes with the point Samsung moved to the end of the ad: replaceable batteries. If you find yourself forgetting to charge at night, or are such a heavy user that you run out of battery before you get home, this can be a huge difference (albeit at a greater cost), but most people don't need it. Advantage goes to the S III, but again, not by much.

The Samsung has 2 GB of RAM compared to 1 GB on the iPhone 5. An advantage, but then again, Android uses more RAM, making it more of a wash. Currently, the iPhone 5 runs perfectly well with the 1 GB, making the difference meaningless. However, in a few years, the new OS versions and software will tax that 1 GB. Advantage goes to the S III; by how much depends on the actual RAM requirements of software used. It should be noted that some variants of the S III only have 1 GB, however.

The real advantages of the S III are the removable battery, the ability to use SD storage in a meaningful way, and the larger screen, for those who like that and are willing to put up with the disadvantages involved (increased size and weight, less battery life). NFC is a possible advantage, depending on whether or not you can use it.

Some points are a wash; both do 4G LTE, both record 1080p video. The OS (iOS vs. Android) is a matter of preference.

Other points? Apple wins on weight and dimensions. You might note that Samsung “overlooked” the physical dimensions. The iPhone 5 is notably smaller in all three dimensions: 4.87 x 2.31 x 0.3 inches (123.8 x 58.6 x 7.6mm) for the iPhone 5, and 5.39 x 2.80 x 0.34 inches (137 x 71 x 8.6mm) for the S III. If you give the S III points for screen size, you have to give points to Apple for profile. Advantage goes to Apple, depending on preference.
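For the curious, those published dimensions can be turned into rough bounding-box volumes (a back-of-the-envelope sketch; `volume_cc` is my own helper, and real phones aren't perfect boxes):

```python
def volume_cc(length_mm, width_mm, height_mm):
    """Bounding-box volume in cubic centimeters from millimeter dimensions."""
    return length_mm * width_mm * height_mm / 1000

iphone5 = volume_cc(123.8, 58.6, 7.6)  # ~55 cc
s3 = volume_cc(137, 71, 8.6)           # ~84 cc
print(round(iphone5), round(s3))
```

By that crude measure, the S III occupies roughly half again as much space as the iPhone 5.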

Samsung’s ad also notes Siri, pitting it against Google’s “S Voice.” According to those who have used both, Siri wins hands-down.

Amusingly, Samsung touts their own “Standard micro-USB plug,” while calling Apple's connector “a totally different plug.” After having used it, I must say I love the fact that you can plug it in either way; I used to struggle with directionality a lot, and still do on the iPad. It's a pain when you're doing it just as you're falling asleep, for example; it wakes you up. True, Apple is hogging all the revenue for the new connector, preventing cheap copies from being sold for a while. But Samsung's main charge, that it's different, is bogus given that Samsung has changed their own connectors more than a dozen times in the past 10 years; this is the first time Apple has changed the plug in a decade. I would call this a wash.

After this in their ad, Samsung then proceeds to list 14 different features presumably unmatched by anything Apple has. As noted above, only two are significant: the NFC and the removable battery. Almost all the rest are specific features residing in a category which, if honestly compared with the iPhone, should allow for dozens more Apple features to be mentioned. I mean, really, “Tilt to Zoom”? “Turn Over to Mute”? Many of these are trivial at best.

How about iCloud built in? Shared photo streams? iMessage allowing texting to expand to other devices? Airplay video streaming? Find my iPhone? Apple’s VIP Mail feature, or “Do Not Disturb”? Facetime? These don’t count? Apple’s 700,000 apps don’t count? (OK, maybe 100,000 when you subtract fart apps. Ditto for Android, though.)

Then there’s security. Even with a jailbreak (which cancels out many of Android’s advantages), the iPhone is likely to be more secure.

Then there’s the hardware. Samsung uses plastic; Apple uses metal. I have never liked the cheap plastic feel of so many phones (including when Apple used it), and much prefer the more solid construction. Both use glass, but in drop tests, Apple fared far better than Samsung.

When I have been able to get my hands on an Android phone, I always test the touchscreen. Apple is noted for having the best sensitivity and fine control, and it shows. Relative to using the iPhone, I have trouble using screens of competing phones, and have seen the owners of these phones experiencing the same difficulty.

I have wanted to do a side-by-side with the S III, but ran into another difficulty: I couldn't find anyone who had one. It made me wonder whether it was even out yet, but yes—it has been available since May.

And that’s what it really comes down to: preference. And back to: sales. See the ratings list I started this post with. Apple is hands-down the winner in terms of popularity.

One thing that I regularly do when I ride the train is to try to note cell phone use. In Japan, at least half the passengers are using them, or so it seems. When I do a count—how many are using the iPhone versus any other phone—I regularly come out with about the same result: about half the phones I see in use are iPhones. That’s versus every other maker combined.

In a country where the iPhone was supposed to be an abject failure, that’s saying something.

Categories: Gadgets & Toys, iPhone, Technology Tags:

Trailer: Seven Minutes of Terror

June 25th, 2012 3 comments

These people are fracking insane, while still being unbelievable geniuses. If they pull this off, the general reaction will probably be, “Oh, another Mars rover.” This trailer, however, shows how utterly fantastic the challenge is. If it works, everyone should be impressed as hell.

Categories: Science, Technology Tags:

Surface Surfaces

June 19th, 2012 12 comments

Is the new Microsoft tablet a winner? Possibly. There’s still a lot that’s not known about it.

On the one hand, Microsoft seems to be presenting something very different from Apple: a full-fledged PC in tablet form. Apple comes close with the Macbook Air, but that is clearly more in the netbook/notebook side. Microsoft’s new “Surface” tablets are definitely on the tablet side.

This could be good, or it could be bad. Microsoft has always, from the very beginning, pushed tablets as regular computers, failing each time because the technology was never good enough to produce a tablet computer. Apple won that game by first waiting for minimally usable technology to evolve before presenting a product, and then presenting it for what it was best suited for instead of staying trapped in the personal-computer paradigm. Companies other than Microsoft made the same mistake with netbooks, and Apple showed them up by waiting until they could make the Macbook Air.

This time, however, Microsoft could come up a winner: the technology may be more than strong enough to support a full-fledged PC in tablet form. If it is, then Apple could be in some trouble, as it has not yet merged its mobile and laptop environments quite enough to produce a Mac-like tablet–a tablet which fully supplants a laptop computer.

On the other hand, Apple may know what it’s doing. Despite some people desperately trying to use the iPad as if it were a laptop, most people are more than satisfied with it being a handheld media device. So the question is, will the tablet form work for a full-fledged PC? It might, but we just don’t know yet. Microsoft strictly limited hands-on access to the device, allowing reviewers only a minute or two with a device, and only the lower-end “RT” model. There were no hands-on demos of the keyboard.

This might be because some of the hardware is not actually ready. Remember when Microsoft previewed Windows Phone 7 Series? Their “hands-on” presentation was to have trained users walk around and show visitors how it was used–and it was a complete disaster. The live-use demos were atrocious.

Which leads to other caveats. Microsoft is not releasing any information on pricing. Why not? Will Microsoft try to pit this against the iPad, taking only minimal profits? Or will it try to match the Surface against the Macbook Air? I can only imagine that the lighter model will be priced low, and the “Pro” model will be in the thousand-dollar range at the high end.

What Microsoft seems to be doing is telling everyone the good news before they hear the bad news–carefully controlling all information so that people only know what’s great about the new device, thus generating excitement–and only later, after (Microsoft hopes) people have formed a solid opinion about and desire for the tablet, quietly disclosing the bad news.

Even more suspicious is that there was no information, not even a hint as far as I could tell, of a release date. Microsoft is famed for introducing fantastic-looking stuff and then not actually releasing it for a long time. When Apple gives a sneak peek, they always give a time frame, even if just a quarter. As far as I can tell, Microsoft has not even given a year yet, though 2013 is a safe bet.

There is the usual Microsoft fan base (and/or the Apple Hating crowd) which more or less automatically proclaims anything Microsoft releases as the best thing since sliced bread; this has to be taken into account when reading what people are saying. The lack of data really makes it impossible to be certain about this product, meaning that anyone who currently claims it will be a hit or a dud is whistling in the dark, at best.

I would normally be tempted to say that Microsoft initially releases a piece of crap but then improves on it, evolves it, and eventually has a solid product. The Zune, however, kind of belied that; Microsoft no longer makes a music player. It could be said that the DNA from the Zune lives on, in Windows Phone 7 (still not doing well, with an embarrassing 4% market share 20 months after release), Metro, and now this tablet.

One telling point is that this is not the first PC-ish tablet to challenge the iPad. Tablets have come out with laptop CPUs, laptop amounts of RAM, USB ports, sexy designs, nice peripherals, etc. None have made a dent in the iPad. This can't just be another full-featured tablet; it has to have something that will jolt people and make them want it, even need it.

Again, this could be an iPad killer. Given Microsoft's track record, however, that's not the safest bet in the world. Microsoft has gotten great hype upon announcing this kind of thing (originally, Windows Phone 7 Series was touted as the best thing since sliced bread even as it fell apart in the hands-on demos), only to have the most serious problems–those of the whole user experience–sink the project upon release.

None of this is to say that Microsoft can't make anything successful in hardware–the Xbox is successful, for example–but it would be wisest to refrain from any conclusions at all until people get a chance to take it home for a week, or even just play with a for-market version for an hour, unsupervised and unconstrained by Microsoft PR hacks.

Categories: Technology Tags:

And Steve Jobs Invented Computers

June 19th, 2012 2 comments

I was reading up on the new Microsoft tablet, and found this paragraph in a story from one of the major networks:

The company has been hit and miss in the hardware market, and when the company misses, it does so epically — remember the Zune? But Microsoft’s hardware successes have become billion-dollar innovations, such as the Xbox or the mouse, which Microsoft pioneered.

Yes. Microsoft pioneered the mouse.

Forget about Douglas Engelbart inventing it in the 60’s. Forget about Xerox being the first major company to design a GUI computer using one. Forget about Apple being the first to successfully deploy it, putting it in the public consciousness. Forget about Microsoft not even being in the hardware business until much, much later.

The Xbox has seen success, but if you pile up Microsoft’s successes and failures in hardware, it’s kind of hard not to notice the dominance of the latter category.

Then I saw who published the story. If they get everything else wrong, why not this? Small wonder they don’t allow comments on their stories….

Categories: People Can Be Idiots, Technology Tags:

The Next Step in HD TV (Long Post)

May 28th, 2012 2 comments

Today I visited the Open House for NHK labs in Setagaya to get a sneak peek at the new “8K” UHDTV (Ultra High Definition TV) standard, known in Japan as Super Hi-Vision. They had their 145-inch super-LCD screen going, in full 7680 x 4320-pixel glory.

Screen01

The system is not just 16 times sharper than your latest-model HDTV; aside from having 16 times the pixels, it’s also progressive scan (not interlaced), and it’s got a refresh rate of 120 Hz. In short, it looks great.

Confused by the tech talk? Let me see if I can explain it.

First, let’s begin with some basic display vocabulary: scan, scan lines, interlaced and progressive scan, refresh rate, pixels, resolution, and aspect ratio. Let’s also go back to the earliest standard TV sets as well.

We refer to scanning in a television because of how the century-old (!) TV technology works.

Farnsworth

That would be a CRT or Cathode Ray Tube. This was a glass vacuum tube with up to three electron guns (red, green, and blue for color) in the back. These guns would fire electrons at a phosphor (light-emitting) screen, in a rectangular shape called a raster.

If none of that makes any sense to you, then forget it. Just remember that the guns in the back of the tube fire energy at the screen to make it light up. But they do so in a pattern. They start at the top left, and slowly go left to right, painting a single line of the picture across the screen. When the right end of the line is finished, the guns go back to the left side and start painting the next line. They do this over and over again, through hundreds of lines. All this in a fraction of a second.

This process was called scanning; each line was a scan line. When the guns finished at the lower-right corner of the screen, a single scan was complete. The number of scans per second is measured in hertz (Hz).

Earlytv

With the technology available in the mid-20th century, scanning each line in order didn’t work well; the picture did not show motion well, and there was flicker. As a result, they came up with a display method called interlaced scan. It fixed these problems, though it also had a disadvantage: it is not as sharp as it could be; small text, for example, often appears fuzzy.

Interlaced scan means that the guns only painted every other line in each scan–that is, on the first scan, it would paint lines 1, 3, 5, 7 and so on; on the following scan, it would fill in the missing even-numbered lines, 2, 4, 6, 8 and so forth. In this way, a single full image took two scans to complete.

Interlaced Fields

The number of scans per second was called the refresh rate. This was set at 60 Hz; because of interlaced scan, this meant that 30 frames per second could be shown.
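The arithmetic here is just division, since an interlaced frame needs two scans while a progressive frame needs one. A minimal sketch:

```python
def frames_per_second(refresh_hz, interlaced):
    # An interlaced frame takes two scans (odd field, then even field);
    # a progressive frame is completed in a single scan.
    return refresh_hz / (2 if interlaced else 1)

print(frames_per_second(60, interlaced=True))   # 30.0 - old NTSC
print(frames_per_second(60, interlaced=False))  # 60.0 - progressive at the same rate
```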

Interlaced scan was not the only way to show an image; progressive scan paints an entire image, all the scan lines, in one scan. Once the problems with motion and flicker were resolved, progressive scan was used in computer monitors, giving them a much better image.

The number of scan lines–the resolution–also had to be decided. The television most people grew up with originated in the 1940’s and 50’s. In North America, the NTSC (National Television System Committee) settled on a standard of 525 horizontal scan lines for the TV, although only about 480 lines are visible, and the other 45 lines are used for other information, including closed captions.

This picture is equivalent to an image on your computer screen 480 pixels tall, the vertical resolution. The horizontal resolution ranges from 640 to 720 pixels, depending on the type.

The aspect ratio (horizontal-to-vertical ratio) is 4:3, although a 720 x 480 screen would be 3:2.
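Reducing a resolution to its aspect ratio is just a matter of dividing out the greatest common divisor of the two pixel counts; for instance:

```python
from math import gcd

def aspect_ratio(w, h):
    # Divide width and height by their greatest common divisor.
    d = gcd(w, h)
    return f"{w // d}:{h // d}"

print(aspect_ratio(640, 480))    # 4:3  - the classic NTSC shape
print(aspect_ratio(720, 480))    # 3:2
print(aspect_ratio(1920, 1080))  # 16:9 - HDTV, for comparison
```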

OK, now that we’re through with all that, what did the old NTSC standard of 480 lines look like? Well, here’s an image with 480 ”lines“ of resolution in the NTSC aspect ratio:

480P

Looks OK, doesn’t it? However, there’s a catch–the image you see above is shown in progressive scan, so it looks sharper than it should. Still, that’s fairly close. This is what we used to think of as a clear, sharp TV image.

However, there’s another hitch: you’re looking at it in a very small space.That image might occupy only as much as 7 inches diagonally. Blow up the same image and paste it on a 40-inch TV, it won’t look so good.

We found this out at around the turn of the century, when the next generation of TVs, called HDTV (High Definition TV), came out. (Japan calls this “Hi-Vision.”) These TVs have a vertical line count of 1080. Since we use LCD screens, and they use pixels, we refer to the overall resolution as 1920 x 1080.

So before, we had 480i (480 lines interlaced); with HDTV, we got 1080i. That’s more than double the lines, and (because the screen has a wider aspect ratio) almost 7 times more information.
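Those multiples are easy to verify from the pixel counts (using the 640-wide NTSC figure):

```python
ntsc = 640 * 480    # 307,200 pixels in the old 4:3 frame
hdtv = 1920 * 1080  # 2,073,600 pixels

print(1080 / 480)   # 2.25 - "more than double the lines"
print(hdtv / ntsc)  # 6.75 - "almost 7 times more information"
```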

Now, I can’t show you an HDTV image on this screen, as it likely would be bigger than your display area (here’s such an image you can view separately). So instead let’s scale things down to about 1/3rd the height, or about 1/8th the area. Of the two images below, the one on top is the same 480i NTSC image scaled down, and below it, an HDTV (1080i) version of the same image. Were the two TVs to have the same ”pixel“ size, this is how they would compare. Note also the difference in aspect ratios:

Ntsc-230X172

Hd-690X388

As you can see, you’re getting a lot more image with HDTV, even if your newer TV doesn’t look that much bigger.

Now, look what an old 480 NTSC image would look like on a newer HDTV screen, with the 1080 image next to it for comparison:

Ntsc-In-Hd

Hd-690X388

The old NTSC image looks kind of fuzzy in comparison, doesn’t it? Now, keep in mind that you are looking at it on a progressive-scan screen at a small screen size! On a real HDTV, it would look even worse. That’s what you see on your new HDTV when they broadcast an old-timey teevee show!

So you can see that HDTV was a big improvement. Even more so was Blu-ray; instead of showing images in 1080i, Blu-ray shows them in 1080p. That's the best quality you'll see on your current TV. It also might even be better quality than some films shot on 35mm film in the old days, which is why Blu-ray doesn't always seem to give you the “best” quality when you're watching films from a long time ago.

Interestingly, George Lucas used a 1080p digital camera to shoot Star Wars: Attack of the Clones. While 35mm film can be better quality than 1080p, under some circumstances they are close enough that most people would not notice the difference, especially with processing that films go through.

So, is 1080 the end? Not by a long shot.


In fact, we’re now heading into perhaps as much as two generations beyond HDTV. They are referred to as 4K and 8K, or QFHD and UHD TV.

The 4K, or QFHD (Quad Full High Definition) is 3840 x 2160, which is exactly double the vertical and horizontal resolution of HDTV (also called “FHD,” or “Full High Definition”), which gives it 4 times the pixels, or overall resolution. Thus the “Quad” label. Current HDTV has about 2,000,000 pixels (2 megapixels); 4K has more than 8,000,000, or 8 megapixels.

The “4K” label, by the way, does not come from the “quad” label; 4K comes from the rough number of horizontal pixels. 3840 is close to 4000, therefore we get “4K.”
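The arithmetic behind the “Quad” label checks out:

```python
fhd = (1920, 1080)   # "Full High Definition"
qfhd = (3840, 2160)  # double the pixels in each direction

# Doubling both dimensions quadruples the pixel count.
assert qfhd[0] == 2 * fhd[0] and qfhd[1] == 2 * fhd[1]

print(fhd[0] * fhd[1])                           # 2,073,600 (~2 megapixels)
print(qfhd[0] * qfhd[1])                         # 8,294,400 (~8 megapixels)
print((qfhd[0] * qfhd[1]) // (fhd[0] * fhd[1]))  # 4 - hence "Quad"
```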

4K is just now becoming available; you can actually buy 4K Blu-ray players and 4K TV sets. HDMI cables are now capable of transmitting 4K video. And 4K is actually closer to a cinema standard–a movie shot in 4K video (as many are now) will look just as good as any shot on film.

However, there’s a catch: 4K TV sets (projectors, really) still cost at least $10,000, even if the 4K-ready Blu-ray players can be had for much cheaper. Oh yeah–there’s nothing to watch in 4K anyway. However, 4K might be good for HDTV-quality 3-D viewing, although that’s a limited use for expensive equipment.

Not that 4K won’t become cheaper and more available in the next few years. The problem is, by the time it catches on, it’ll already be obsolete.

You see, NHK here in Japan is working on 8K: a full 7680 x 4320 pixels, more than 33,000,000, or 33 megapixels. That’s more than 100 times the number of pixels on a pre-HD television set! Not only that, it’s progressive scan. And it scans 120 times a second (120 Hz), so you get sharpness even with motion that would blur on current TVs.
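Those pixel counts are worth verifying; a quick check (using the 640-wide NTSC figure for the pre-HD set):

```python
shv = 7680 * 4320            # 33,177,600 pixels - about 33 megapixels
ntsc = 640 * 480             # a pre-HD television set

print(shv / ntsc)            # 108.0 - "more than 100 times"
print(shv / (1920 * 1080))   # 16.0  - 16 times current HDTV
```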

Movingtext

Still not impressed? Let me show you a scale showing all the different resolutions:

8K

See that tiny orange scrap at top left? That's your old NTSC TV set (the higher-quality version of it, with top DVD quality). The screen two levels down, the darker green one, marked “HDTV 1080p”? That's your current flat-screen set. The light green square is 4K, which is coming out right now. The largest light-blue square is Super Hi-Vision.

They say that it will be ready for broadcast from NHK’s satellites from 2020 (give or take a few years). By 2025 they expect to broadcast that over the Internet.

Not only that, the sound will improve. Today's “home theater” systems include 5.1 surround sound, meaning there are five speakers surrounding the viewer, and a subwoofer for bass.

Super Hi-Vision has a 22.2 sound system–yep, 24 speakers in all: 9 at the top of the room, 10 around the middle, and 3 normal speakers in front, accompanied by two subwoofers.

Nhkspeakers01

Nhkwoofer01

And still, we’re not finished. After all, you can’t just go and increase resolution by 16 times and expect it’ll still fit on the same media, right?

When DVDs were too small for recorded HDTV, we got Blu-ray, going from 4.7 GB for DVDs to 25 ~ 100 GB for Blu-rays. Even with compression, however, Super Hi-Vision will require media that's 250 GB in size, at least.

Now, Blu-ray might get there–its 25 GB per layer has already grown to 100 GB thanks to 4-layer discs, and 10-layer discs are not too far off. However, it's not just the capacity: it's the access speed. If the media can't shoot out the video fast enough, it won't work. And the guy I spoke to at NHK (several of them spoke pretty good English, not surprisingly) said that even with upgrades, Blu-ray just won't cut it.
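To see why speed is the wall, consider the raw numbers (a back-of-the-envelope sketch; the 24-bit color figure is my assumption, and real recordings would be compressed far below this):

```python
# Uncompressed figures only - broadcasts use heavy compression,
# but the raw rate shows why read speed, not just capacity, matters.
width, height, fps = 7680, 4320, 120
bytes_per_pixel = 3  # assuming ordinary 24-bit color

raw_rate = width * height * fps * bytes_per_pixel
print(raw_rate / 1e9)  # ~11.9 - gigabytes of uncompressed video per second
```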

So NHK is looking into alternatives–like this:

Nhkdisc01

Note the NHK disc is floppy, not unlike the opaque black mylar film used in the original floppies decades back. But this disc (which will probably be firmer when released) holds 100 GB per layer due to a lens process with blue lasers which halves the width of the beam, thus producing 4 times the capacity. A 4-layer SHV disc would hold 400 GB, more than enough for an SHV video.

Shvmedia

But that’s not all. They’re also working on holographic media:

Nhkholodisk

See those tiny little dots? Each one is 10 MB of data. Or was it 100 MB? Frankly, I forget–I didn’t write it down. Whatever the case, the guy said that one of those inch-square plastic (glass?) chips would hold as much as a terabyte of data. He also said that it wasn’t reliable enough for data storage yet–but he did say that he expected it to be sold on the market within three years.

He even had a cool laser setup you could look at:

Nhkholodisk02

That disc in the middle is not the media–the square chip on it is.

Now, when I saw this, something immediately came to mind. Data storage media, in the form of thick plastic cards… where had I seen those before…?

Trekmicrotapes

Of course, they seemed to have far less capacity back then.

Categories: Technology Tags:

SOPA, PIPA Shelved

January 21st, 2012 2 comments

The bills are in storage but not necessarily dead. Their seemingly inevitable momentum, however, is, at least for the moment, halted.

Ironically, these bills, which are supported by both political parties but overwhelmingly by conservatives, were taken down in no small part by something conservatives would have expected to come from the other group of business interests: a corporate strike right out of Ayn Rand's Atlas Shrugged.

The thing is, the biggest giant to go on strike and stop producing for society was Wikipedia, a not-for-profit foundation. Yes, other for-profits joined in, like Google and Facebook, but those giants did not shut down, probably for the same reason true Randian corporate strikes never happen: they don’t want to stop making money.

Alas, the politicians are doing little but playing an evasive waiting game, knowing that momentum like we saw recently is hard to build, and they can just quietly come back to this issue in weeks or months. Hopefully, the protests will not subside.