Why The Tech Patent Wars Exist [INFOGRAPHIC]

I've really had enough of this whole tech patent hoopla. Ever since computers were invented, tech companies have thrown billions of dollars at lawyers whose job is to file for every patent humanly possible and remotely achievable, secure in the knowledge that the Patent & Trademark Office must be run by geezers who think everything that crosses their desk is groundbreaking, world-wide-intertubes glory.

After seeing this patent application filed by Google in 2000, I'm convinced that every major tech company has an entire business division dedicated to the following strategy:

This is based DIRECTLY on the application filed by Google that attempts to patent 'Systems and methods for enticing users to access a web site':

Click this… uh oh, would that be patent infringement?

I propose that someone submit a patent for submitting stupid patent requests. The above infographic could be submitted as the 'invention.' Then anyone who submits a dumb patent could be sued for infringing on my genius patent, which would discourage others from filing dumb patents, since it wouldn't be lucrative unless they actually had an original, innovative, patentable idea.

Sorry… all these patent lawsuits on flimsy or baseless patents are extremely annoying. Am I alone?

"

 

(Via Android Phone Fans.)

Scientists create 10 billion qubits in silicon, get us closer than ever to quantum computing

We are totally ready for a quantum computer. Browse the dusty Engadget archives and you'll find many posts about the things, each charting another step along the way to our supposed quantum future. Here's another step, and we think it's a pretty big one. An international team of scientists has managed to generate 10 billion quantum-entangled bits, the basic building blocks of a quantum computer, and embed them all in silicon, which is, of course, the basic building block of a boring computer. It sounds like there's still some work to be done before the team can actually modify and read the states of those qubits, and probably a decade's worth of thumb-twiddling before they let any of us try to run Crysis on it, but yet another step has been made.

[Image credit: Smite-Meister]


 

(Via Engadget.)

Strait Power turbine is water-powered, shark-inspired (video)

The basking shark, with its five-foot jaw, is one of the most ferocious-looking critters that ever swam the sea. It's pretty much harmless, though, just filtering out tiny bits of food and leaving idle dippers and their water wings alone. That gaping maw served as the inspiration for Anthony Reale's Strait Power: effectively a double nozzle that fits around a hydro turbine or two and turns the flow of water into electrical power, boosting the turbine's efficiency by creating an area of high pressure ahead of it and low pressure behind it. The result was a 40 percent boost in efficiency -- and some soggy jeans, as you can see in the videos below. The first gives a quick overview; the second is an uber-detailed discussion of the development from start to finish. Choose your path.
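For a rough sense of what a 40 percent efficiency boost means in watts, here's a quick back-of-the-envelope sketch using the standard hydrokinetic power formula P = ½ρAv³·η. The rotor area, current speed, and baseline efficiency below are made-up illustrative numbers, not figures from Reale's project.

```python
# Rough back-of-the-envelope: what a 40% efficiency boost means for a
# small hydrokinetic turbine. All numbers below are illustrative guesses,
# not measurements from the Strait Power project.

RHO_WATER = 1000.0   # kg/m^3, density of fresh water

def hydro_power(area_m2, flow_m_per_s, efficiency):
    """Power extracted from moving water: P = 0.5 * rho * A * v^3 * efficiency."""
    return 0.5 * RHO_WATER * area_m2 * flow_m_per_s ** 3 * efficiency

area = 0.25          # m^2 swept rotor area (assumed)
flow = 2.0           # m/s current speed (assumed)
base_eff = 0.25      # baseline turbine efficiency (assumed)

baseline = hydro_power(area, flow, base_eff)
boosted = hydro_power(area, flow, base_eff * 1.4)   # +40% from the nozzle

print(f"baseline: {baseline:.0f} W, with nozzle: {boosted:.0f} W")
# baseline: 250 W, with nozzle: 350 W
```

The cubic dependence on flow speed is why siting matters so much, but a 40 percent efficiency gain still translates directly into 40 percent more output at any given site.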


 

(Via Engadget.)

Kinect hack turns you into a punching, waving MIDI controller (video)

If you're looking for an awesome, impractical way to make music with your computer (and who isn't?), please direct your attention to the following Kinect hack. Shinect, the brainchild of a YouTube user named Shinyless, uses motion detection to turn you into a MIDI controller. The current implementation gives the operator two virtual pads that can be triggered with the old Jersey Shore fist pump -- and if that ain't enough, the sounds can be pitch-shifted by raising or lowering the other arm. Pretty sweet, huh? The hack uses OpenNI, and while Shinyless demonstrates it with FruityLoops, it should work with any MIDI device. Things are pretty rough 'n' ready at the moment, although he promises big things in the future. In the meantime, check out the proof of concept in the video after the break.
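To give a flavor of how a gesture-to-MIDI mapping like this can be wired up, here's a minimal sketch (not Shinyless's code). It assumes some skeleton tracker already hands you hand coordinates every frame and uses the mido library to emit the MIDI messages; the thresholds, note numbers, and the on_frame callback are all invented for illustration.

```python
# Minimal gesture-to-MIDI sketch (not the Shinect code): assumes a skeleton
# tracker supplies right/left hand positions each frame, and uses mido to
# send MIDI messages to whatever synth or DAW is listening.
import mido

out = mido.open_output()          # default MIDI output port

PUNCH_THRESHOLD = 0.25            # metres the fist must travel forward (assumed)
prev_right_z = None

def on_frame(right_hand, left_hand):
    """right_hand/left_hand are (x, y, z) tuples from the tracker."""
    global prev_right_z
    x, y, z = right_hand

    # Fist pump: a quick forward jab of the right hand triggers a drum pad.
    if prev_right_z is not None and prev_right_z - z > PUNCH_THRESHOLD:
        pad = 36 if x < 0 else 38          # left half = kick, right half = snare
        out.send(mido.Message('note_on', note=pad, velocity=100))
        out.send(mido.Message('note_off', note=pad, velocity=0))
    prev_right_z = z

    # Left-hand height bends the pitch: map y in [0, 1] onto the pitch wheel range.
    _, ly, _ = left_hand
    bend = int(max(-1.0, min(1.0, 2 * ly - 1)) * 8191)
    out.send(mido.Message('pitchwheel', pitch=bend))
```

In practice you would point the output port at whatever soft synth or DAW you want to drive.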


 

(Via Engadget.)

The AI Revolution Is On

Artificial intelligence is here. In fact, it's all around us. But it's nothing like we expected.

[Image: Today's AI bears little resemblance to its initial conception. The field's trailblazers believed success lay in mimicking the logic-based reasoning that human brains were thought to use. Photo: Dwight Eschliman; Illustration: Zee Rogér]

Diapers.com warehouses are a bit of a jumble. Boxes of pacifiers sit above crates of onesies, which rest next to cartons of baby food. In a seeming abdication of logic, similar items are placed across the room from one another. A person trying to figure out how the products were shelved could well conclude that no form of intelligence—except maybe a random number generator—had a hand in determining what went where.

But the warehouses aren’t meant to be understood by humans; they were built for bots. Every day, hundreds of robots course nimbly through the aisles, instantly identifying items and delivering them to flesh-and-blood packers on the periphery. Instead of organizing the warehouse as a human might—by placing like products next to one another, for instance—Diapers.com’s robots stick the items in various aisles throughout the facility. Then, to fill an order, the first available robot simply finds the closest requested item. The storeroom is an ever-shifting mass that adjusts to constantly changing data, like the size and popularity of merchandise, the geography of the warehouse, and the location of each robot. Set up by Kiva Systems, which has outfitted similar facilities for Gap, Staples, and Office Depot, the system can deliver items to packers at the rate of one every six seconds.
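Here's a toy sketch of the idea behind that kind of chaotic storage, under assumptions of my own (random slotting, straight-line distances, one robot per pick). It is purely illustrative, not Kiva's actual software.

```python
# Toy sketch of "chaotic" warehouse storage: items go wherever there is a
# free slot, and an order is filled by sending the robot to the closest slot
# holding the requested item. Purely illustrative, not Kiva Systems' algorithm.
import math
import random

class Warehouse:
    def __init__(self, slots):
        self.slots = slots                 # slot id -> (x, y) location
        self.contents = {}                 # slot id -> product stored there

    def store(self, product):
        """Put the product in any random empty slot, ignoring what its neighbours hold."""
        empty = [s for s in self.slots if s not in self.contents]
        slot = random.choice(empty)
        self.contents[slot] = product
        return slot

    def pick(self, product, robot_xy):
        """Send the robot to the closest slot that holds the requested product."""
        candidates = [s for s, p in self.contents.items() if p == product]
        slot = min(candidates, key=lambda s: math.dist(self.slots[s], robot_xy))
        del self.contents[slot]
        return self.slots[slot]

wh = Warehouse({i: (random.random() * 100, random.random() * 100) for i in range(50)})
for item in ["pacifiers", "onesies", "baby food", "pacifiers"]:
    wh.store(item)
print(wh.pick("pacifiers", robot_xy=(0, 0)))   # location of the nearest pacifier crate
```

The counterintuitive part is that scattering identical products around the floor is a feature: wherever a robot happens to be, some copy of the item is probably close by.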

The computers are in control. We just live in their world.

The Kiva bots may not seem very smart. They don’t possess anything like human intelligence and certainly couldn’t pass a Turing test. But they represent a new forefront in the field of artificial intelligence. Today’s AI doesn’t try to re-create the brain. Instead, it uses machine learning, massive data sets, sophisticated sensors, and clever algorithms to master discrete tasks. Examples can be found everywhere: The Google global machine uses AI to interpret cryptic human queries. Credit card companies use it to track fraud. Netflix uses it to recommend movies to subscribers. And the financial system uses it to handle billions of trades (with only the occasional meltdown).

This explosion is the ironic payoff of the seemingly fruitless decades-long quest to emulate human intelligence. That goal proved so elusive that some scientists lost heart and many others lost funding. People talked of an AI winter—a barren season in which no vision or project could take root or grow. But even as the traditional dream of AI was freezing over, a new one was being born: machines built to accomplish specific tasks in ways that people never could. At first, there were just a few green shoots pushing up through the frosty ground. But now we’re in full bloom. Welcome to AI summer.

TRANSPORTATION

All aboard the algorithm.

Model trains are easy to keep track of. But building a model to run real trains is a complex undertaking. So about two years ago, when Norfolk Southern Railway decided to install a smarter system to handle its sprawling operation, it brought in a team of algorithm geeks from Princeton University.

What they got was the Princeton Locomotive and Shop Management System, or Plasma, which uses an algorithmic strategy to analyze Norfolk Southern’s operations. Plasma tracks thousands of variables, predicting the impact of changes in fleet size, maintenance policies, transit time, and other factors on real-world operations. The key breakthrough was making the model mimic the complex behavior of the company’s dispatch center in Atlanta. “Think of the dispatch center as one big, collective brain. How do you get a computer to behave like that?” asks Warren Powell, a professor in Princeton’s Department of Operations Research and Financial Engineering.

The model that Powell and his team came up with was, in effect, a kind of AI hive mind. Plasma uses a technology known as approximate dynamic programming to examine mountains of historical data. The system then uses its findings to model the dispatch center’s collective human decisionmaking and even suggest improvements.
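For readers curious what approximate dynamic programming looks like in code, here is a minimal, generic sketch: a single yard deciding how many locomotives to hold each day, with the costs, demand, and learning scheme all invented for illustration. It is not Plasma or anything from Norfolk Southern.

```python
# Minimal approximate dynamic programming (ADP) sketch on a toy fleet-sizing
# problem. The value of ending a day with s locomotives is held in a table
# that is nudged toward simulated outcomes and then reused to make greedy
# decisions on later passes. All costs and demand are invented.
import random

MAX_FLEET = 20
HOLD_COST, SHORTAGE_COST, MOVE_COST = 1.0, 10.0, 0.5   # assumed costs
GAMMA, ALPHA = 0.9, 0.05                               # discount factor, learning rate

value = [0.0] * (MAX_FLEET + 1)    # approximate expected cost-to-go for state s

def day_cost(stock, demand):
    used = min(stock, demand)
    return HOLD_COST * (stock - used) + SHORTAGE_COST * (demand - used)

for _ in range(20000):
    stock = random.randint(0, MAX_FLEET)
    demand = random.randint(0, 15)                     # simulated daily demand
    # Decide tomorrow's fleet level greedily against the current approximation:
    # pay to reposition units, then face the discounted estimated future cost.
    nxt = min(range(MAX_FLEET + 1),
              key=lambda s: MOVE_COST * abs(s - stock) + GAMMA * value[s])
    # Nudge today's estimate toward what the simulation actually observed.
    observed = day_cost(stock, demand) + MOVE_COST * abs(nxt - stock) + GAMMA * value[nxt]
    value[stock] = (1 - ALPHA) * value[stock] + ALPHA * observed

print("estimated cost-to-go by fleet size:", [round(v, 1) for v in value])
```

The essential ADP move is in the last two statements of the loop: decisions are made greedily against an approximate value table, and the table is then updated toward whatever the simulation observed.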

For now, Plasma is serving just as a tool to help Norfolk Southern decide what its fleet size should be—humans are still in control of dispatching the trains. At least we’re still good for something. 
—Jon Stokes.

Today’s AI bears little resemblance to its initial conception. The field’s trailblazers in the 1950s and ’60s believed success lay in mimicking the logic-based reasoning that human brains were thought to use. In 1957, the AI crowd confidently predicted that machines would soon be able to replicate all kinds of human mental achievements. But that turned out to be wildly unachievable, in part because we still don’t really understand how the brain works, much less how to re-create it.

So during the ’80s, graduate students began to focus on the kinds of skills for which computers were well-suited and found they could build something like intelligence from groups of systems that operated according to their own kind of reasoning. “The big surprise is that intelligence isn’t a unitary thing,” says Danny Hillis, who cofounded Thinking Machines, a company that made massively parallel supercomputers. “What we’ve learned is that it’s all kinds of different behaviors.”

AI researchers began to devise a raft of new techniques that were decidedly not modeled on human intelligence. By using probability-based algorithms to derive meaning from huge amounts of data, researchers discovered that they didn’t need to teach a computer how to accomplish a task; they could just show it what people did and let the machine figure out how to emulate that behavior under similar circumstances. They used genetic algorithms, which comb through randomly generated chunks of code, skim the highest-performing ones, and splice them together to spawn new code. As the process is repeated, the evolved programs become amazingly effective, often comparable to the output of the most experienced coders.
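Here's a bare-bones sketch of the generate-select-splice loop described above. To keep it self-contained, the "code" being evolved is just a bit string scored against a target; a real system would evolve program fragments, but the mechanics are the same.

```python
# Bare-bones genetic algorithm: generate random candidates, keep the best
# performers, and splice (crossover) them to produce the next generation.
# The bit string stands in for the code fragments a real system would evolve.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP_SIZE, GENERATIONS, MUTATION_RATE = 60, 200, 0.02

def fitness(candidate):
    return sum(a == b for a, b in zip(candidate, TARGET))

def splice(mom, dad):
    cut = random.randrange(1, len(TARGET))            # single-point crossover
    child = mom[:cut] + dad[cut:]
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in child]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    elite = population[: POP_SIZE // 4]               # skim the highest performers
    population = elite + [splice(random.choice(elite), random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]

best = max(population, key=fitness)
print(f"best after {gen + 1} generations: {best} (score {fitness(best)})")
```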

MIT’s Rodney Brooks also took a biologically inspired approach to robotics. His lab programmed six-legged buglike creatures by breaking down insect behavior into a series of simple commands—for instance, “If you run into an obstacle, lift your legs higher.” When the programmers got the rules right, the gizmos could figure out for themselves how to navigate even complicated terrain. (It’s no coincidence that iRobot, the company Brooks cofounded with his MIT students, produced the Roomba autonomous vacuum cleaner, which doesn’t initially know the location of all the objects in a room or the best way to traverse it but knows how to keep itself moving.)
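A toy version of that reactive, rule-based style of control might look like the sketch below. The sensor names and rule priorities are invented, but the point is the same as Brooks's: no map, no plan, just simple reflexes applied on every tick.

```python
# Toy reactive controller in the spirit of Brooks's bug robots: no model of
# the room, just prioritized reflexes evaluated every tick. Sensor names and
# rules are invented for illustration.
def step(sensors):
    """sensors: dict with 'cliff_ahead', 'bumped', 'obstacle_ahead' booleans."""
    if sensors["cliff_ahead"]:
        return "back_up_and_turn"          # highest-priority reflex
    if sensors["bumped"]:
        return "lift_legs_higher"          # the rule quoted in the article
    if sensors["obstacle_ahead"]:
        return "turn_left"
    return "walk_forward"                  # default keeps the robot moving

# The controller never plans a route; the appearance of competent navigation
# emerges from repeatedly applying these reflexes to whatever the sensors report.
print(step({"cliff_ahead": False, "bumped": True, "obstacle_ahead": False}))
```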

The fruits of the AI revolution are now all around us. Once researchers were freed from the burden of building a whole mind, they could construct a rich bestiary of digital fauna, which few would dispute possess something approaching intelligence. “If you told somebody in 1978, ‘You’re going to have this machine, and you’ll be able to type a few words and instantly get all of the world’s knowledge on that topic,’ they would probably consider that to be AI,” Google cofounder Larry Page says. “That seems routine now, but it’s a really big deal.”

Even formerly mechanical processes like driving a car have become collaborations with AI systems. “At first it was the automatic braking system,” Brooks says. “The person’s foot was saying, I want to brake this much, and the intelligent system in the middle figured when to actually apply the brakes to make that work. Now you’re starting to get automatic parking and lane-changing.” Indeed, Google has been developing and testing cars that drive themselves with only minimal human involvement; by October, they had already covered 140,000 miles of pavement.

In short, we are engaged in a permanent dance with machines, locked in an increasingly dependent embrace. And yet, because the bots’ behavior isn’t based on human thought processes, we are often powerless to explain their actions. Wolfram Alpha, the website created by scientist Stephen Wolfram, can solve many mathematical problems. It also seems to display how those answers are derived. But the logical steps that humans see are completely different from the website’s actual calculations. “It doesn’t do any of that reasoning,” Wolfram says. “Those steps are pure fake. We thought, how can we explain this to one of those humans out there?”

The lesson is that our computers sometimes have to humor us, or they will freak us out. Eric Horvitz—now a top Microsoft researcher and a former president of the Association for the Advancement of Artificial Intelligence—helped build an AI system in the 1980s to aid pathologists in their studies, analyzing each result and suggesting the next test to perform. There was just one problem—it provided the answers too quickly. “We found that people trusted it more if we added a delay loop with a flashing light, as though it were huffing and puffing to come up with an answer,” Horvitz says.

But we must learn to adapt. AI is so crucial to some systems—like the financial infrastructure—that getting rid of it would be a lot harder than simply disconnecting HAL 9000’s modules. “In some sense, you can argue that the science fiction scenario is already starting to happen,” Thinking Machines’ Hillis says. “The computers are in control, and we just live in their world.” Wolfram says this conundrum will intensify as AI takes on new tasks, spinning further out of human comprehension. “Do you regulate an underlying algorithm?” he asks. “That’s crazy, because you can’t foresee in most cases what consequences that algorithm will have.”

In its earlier days, artificial intelligence was weighted with controversy and grave doubt, as humanists feared the ramifications of thinking machines. Now the machines are embedded in our lives, and those fears seem irrelevant. “I used to have fights about it,” Brooks says. “I’ve stopped having fights. I’m just trying to win.”

Senior writer Steven Levy (steven_levy@wired.com) wrote about the rise of hacker culture in issue 18.05.

 

Alex Halderman and India’s assault on academic freedom

Five years ago, not long after the founding of Shtetl-Optimized, I blogged about Alex Halderman: my best friend since seventh grade at Newtown Junior High School, now a famous security researcher and a computer science professor at the University of Michigan, and someone whose exploits seem to be worrying at least one government as much as Julian Assange’s.

In the past, Alex has demonstrated the futility of copy-protection schemes for music CDs, helped force the state of California to change its standards for electronic voting machines, and led a spectacular attack against an Internet voting pilot in Washington DC.  But Alex’s latest project is probably his most important and politically riskiest yet.  Alex, Hari Prasad of India, and Rop Gonggrijp of the Netherlands demonstrated massive security problems with electronic voting machines in India (which are used by about 400 million people in each election, making them the most widely used voting system on earth).  As a result of this work, Hari was arrested in his home and jailed by the Indian authorities, who threatened not to release him until he revealed the source of the voting machine that he, Alex, and Rop had analyzed.  After finally being released by a sympathetic judge, Hari flew to the United States, where he received the Electronic Frontier Foundation’s 2010 Pioneer Award.  I had the honor of meeting Hari at MIT during his and Alex’s subsequent US lecture tour.

But the story continues.  Earlier this week, after flying into India to give a talk at the International Conference on Information Systems Security (ICISS’2010) in Gandhinagar, Alex and Rop were detained at the New Delhi airport and threatened with deportation from India.  No explanation was given, even though the story became front-page news in India.  Finally, after refusing to board planes out of New Delhi without being given a reason in writing for their deportation, Alex and Rop were allowed to enter India, but only on the condition that they did so as ‘tourists.’ In particular, they were banned from presenting their research on electronic voting machines, and the relevant conference session was cancelled.

To those in the Indian government responsible for the harassment of Alex Halderman and Rop Gonggrijp and (more seriously) the imprisonment of Hari Prasad: shame on you!  And to Alex, Hari, and Rop: let the well-wishes of this blog be like a small, nerdy wind beneath your wings.

"

 

(Via Shtetl-Optimized.)

Model describes universe with no big bang, no beginning, and no end

Jul 29, Physics/General Physics


(PhysOrg.com) -- By suggesting that mass, time, and length can be converted into one another as the universe evolves, Wun-Yi Shu has proposed a new class of cosmological models that may fit observations of the universe better than the current big bang model. Specifically, the new models might explain the accelerating expansion of the universe without invoking a cosmological constant or other form of dark energy, and they may also resolve or eliminate other cosmological dilemmas such as the flatness problem and the horizon problem.

Shu, an associate professor at National Tsing Hua University in Taiwan, explains in a study posted at arXiv.org that the new models emerge from a new perspective on some of the most basic entities: time, space, mass, and length. In his proposal, time and space can be converted into one another, with a varying speed of light as the conversion factor. Mass and length are also interchangeable, with the conversion factor depending on both a varying gravitational “constant” and a varying speed of light (G/c²). Basically, as the universe expands, time is converted into space, and mass is converted into length. As the universe contracts, the opposite occurs.
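In schematic form (my notation, not necessarily Shu's), the two conversions described above can be written as follows; note that G/c² carries units of meters per kilogram, so it does indeed turn a mass into a length:

```latex
% Schematic form of the conversions described above (notation mine, not
% necessarily Shu's): time and space interconvert through a time-varying
% speed of light c(t), and mass and length through G(t)/c(t)^2.
\[
  x \;\sim\; c(t)\,t,
  \qquad
  \ell \;\sim\; \frac{G(t)}{c(t)^{2}}\,m .
\]
```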

“We view the speed of light as simply a conversion factor between time and space in spacetime,” Shu writes. “It is simply one of the properties of the spacetime geometry. Since the universe is expanding, we speculate that the conversion factor somehow varies in accordance with the evolution of the universe, hence the speed of light varies with cosmic time.” 

As Shu writes in his paper, the newly proposed models have four distinguishing features: 

• The speed of light and the gravitational “constant” are not constant, but vary with the evolution of the universe. 
• Time has no beginning and no end; i.e., there is neither a big bang singularity nor a big crunch singularity. 
• The spatial section of the universe is a 3-sphere [a higher-dimensional analogue of a sphere], ruling out the possibility of a flat or hyperboloid geometry. 
• The universe experiences phases of both acceleration and deceleration. 

He tested one of the models against current cosmological observations of Type Ia supernovae, which have revealed that the universe appears to be expanding at an accelerating rate. He found that, because acceleration is an inherent part of his model, it fits the redshift data of the observed supernovae quite well. In contrast, the currently accepted big bang model does not fit the data, which has led scientists to search for other explanations such as dark energy, which theoretically makes up about 75% of the mass-energy of the universe. 

Shu’s models may also account for other problems faced by the standard big bang model. For instance, the flatness problem arises in the big bang model from the observation that a seemingly flat universe such as ours requires finely tuned initial conditions. But because the universe is a 3-sphere in Shu’s models, the flatness problem “disappears automatically.” Similarly, the horizon problem occurs in standard cosmology because distant regions of the universe share the same physical properties even though, given their great separation, that would seem to require communication faster than the speed of light. Shu’s models avoid this problem because they have no big bang origin and acceleration is intrinsic to them. 

“Essentially, this work is a novel theory about how the magnitudes of the three basic physical dimensions, mass, time, and length, are converted into each other, or equivalently, a novel theory about how the geometry of spacetime and the distribution of mass-energy interact,” Shu writes. “The theory resolves problems in cosmology, such as those of the big bang, dark energy, and flatness, in one fell stroke.”

More information: Wun-Yi Shu. "Cosmological Models with No Big Bang." arXiv:1007.1750v1
via: The Physics ArXiv Blog

© 2010 PhysOrg.com

 

