randy knaflic: --close. we'll get things kicked off. so first of all, welcome. and to all the americansin the house, having thanksgiving. a few. all right. good. so i'm randy knaflic.

i head up google's recruitingmachine here in emea, and i'm based here in zurich. and i just wanted to-- i get the pleasure and the honorof introducing our guest this evening. and since we have all of youhere, i thought it would be good chance to just spend justa couple minutes because some of you may or may not befamiliar with this search company called google.

so i'll do my best to give you a little education on that. but probably it's better just to tell you a little bit about google here in zurich, switzerland. so we've actually been here for nearly four years now. and started with actually one engineer. and we have the honor of having, actually, that one engineer in the front row. so if you're interested in seeing how you go from one engineer to a site that's over 350 in a very short time, he

can tell you how it'sexpanded quickly. and i can tell you-- i had a little bit more hairwhen this all started. so google's idea was tobasically take the model that worked really well in mountainview, the engineering model that worked there, and basicallyrecognizing that there's this one little fact. that is, there are a lot ofengineers in the world. and guess what?

they don't all sit in mountainview, california. so, believe it or not, theydecided they wanted to bring this model a little bitcloser to the masses. and zurich was the first sitewhere they decided to do that. and since then, we've actuallygrown to having 12 engineering sites throughout emea and anumber of different locations in all the major cities thathave engineering talent. and it's worked really well. and so having the engineeringsites in these locations is

enabling us to do things bothfor the global market and for the local market. so most of you arehere in zurich. and whether you knew this ornot, next time you're on google, check out google mapsand all those really cool things that you can do likesee what time the bus-- the bus outside the hurlimann-areal that comes every-- i think it's, what, four days?

it's a really bad schedule on that. but you would see that schedule. and that was done by our engineers here. not the bus schedule itself, but the letting you know when it will come. we're working on that as well. we'll get back to you on that. so we're continuing to grow. and we'll continue to tap into the talent that is in not only

switzerland, but beyond. this site is very special, and it's probably no secret that zurich is a great place to live. and we're able to attract a lot of talent from all over. in fact, we have over 42 different countries represented just in this one location. actually, i should say two locations, because this is now our second site that all of you will have an opportunity-- those who opted for a tour will get to see our new site

here in the hurlimann-areal. so that leads us to the nextstep, which is actually we have a returning visitor and ourguest and honorable guest for this evening. and it's fun, because every timevint comes back, he gets to see how-- he comes. there's 30, 40 engineers here. he runs away. comes back, there's300 and some here.

so we're going tokeep doing that. we're hoping now the next timehe comes back, we even have this entire building filled andwe'll continue to grow. so he's a man who needs verylittle introduction. he's actually the real father ofthe internet, despite what some americans thinkabout mr. gore. the work that he did in creatingthe tcp/ip protocols is obviously what has enabledcompanies like google to exist. so when he joined inseptember of 2005, this was a

huge honor for us and a greatwin to have such a visionary on board. and we're very pleasedto have him. and it's with my greatpleasure that i introduce vint cerf. vint cerf: thank youvery, very much. and i really appreciateeveryone's taking time out, especially in the middleof the week, in the evening, to join us.

it's a real pleasure to have you here on our google campus. i don't know where renee laplante is right now, but it's her birthday today. renee, where are you? happy birthday. as for the crack about al gore, there's always some nincompoop who brings that up. al gore deserves credit for what he did as a senator and as vice-president.

he actually helped to passlegislation that enabled nsfnet backbone to grow and topermit commercial traffic to flow on the government-sponsored backbones in the us. had he not done that, it'spretty likely that the commercial sector would not haveseen an opportunity to create a commercial internetthat all of us can enjoy. so he does deserve some creditfor what he's done. i meant to start out--

[speaking german] and that's all the german you'regoing to get tonight. i did spend a very pleasant sixmonths in the area around stuttgart as a young studentat stanford university. well, my purpose tonight is togive you a sense for what's happening to the internet todayand where we think it's headed in the future. and i thought i would also takeadvantage of the time to give you little glimpses of whatit was like in the early

stages of the internet. but first, let me explainsomething about my title. i'm google's chief internetevangelist. it wasn't a title that i asked for. when i joined the company,they said, what title you want? and i suggested archduke. and they said, well, thatdoesn't quite fit in any of our nomenclature.

and they pointed out that theprevious archduke was ferdinand, and he wasassassinated in 1914, and it started world war i. so maybethat's not a title that you want to have. and they suggestedthat, considering what i've been doing for thelast 35 years, that i ought to become the internet evangelistfor google. so i showed up on first day ofwork wearing this outfit that i guess you see overon your right. this is the formal academicrobes of the university of the

balearic islands. and it was the mostecclesiastical outfit that i owned. so i showed up wearingthat on my first day of work at google. and eric schmidt tookthis picture. it's not often you can find anopportunity to wear something looking like that. so i took advantage of thatfor that one day.

well, let me just start out by reminding you of some statistics of the internet over the last 10 years. 10 years ago, i would have been very excited to tell you there were 22 million machines on the internet. now, there are almost 500 million. and these are servers, the web servers, the email servers, and the like. it's not the machines that are episodically connected, like laptops or personal digital assistants.

the number of users on the nethas grown to 1.2 billion, which sounds like a big numberuntil you realize that there are 6 and 1/2 billionpeople in the world. so the chief internet evangelisthas 5.3 billion people to convertto internet use. so i have a long ways to go. the other thing which has beenhappening in the telecom environment over the last decadehas been the rapid influx of mobiles.

the estimates now are that by the end of this year there will be 3 billion mobiles in use-- 2.3 billion accounts and 3 billion mobiles, which means there are 700 million people with more than one mobile. what's important to us and others in the internet environment is that many people will have their first introduction to the internet through a mobile and not through a laptop or a desktop. there are estimated to be about 10% of all the mobiles

in use that are internetenabled. and so as time goes on and moreand more of these devices become part of the landscape, anincreasing number of people in the world will have theirfirst introduction to the internet by way of a mobile asopposed to other instruments. if we look at the distributionof users on the network, the first thing that strikes me,anyway, is that 10 years ago, north america would have beenthe largest absolute population of internet users.

but today, it's asia, whichincludes china, and india, and indonesia, and malaysia,japan, and so on. but interestingly enough, thislarge number, 460 million people, represent only 12% ofthe population in that region. so as they reach the samepenetrations as we have, for example, in europe, at 42%, theabsolute numbers of asian users will increase. this tells you something aboutwhat to expect in terms of the content of the internet.

the kinds of interests that people will have, the culture and style of use of the net will all be strongly influenced by our colleagues living in that region. europe has almost 338 million users, with a penetration of about 42%. i've given up trying to make any predictions at all about europe because you keep re-defining yourselves by adding countries. so whatever happens is going to happen.

africa is at the bottom of the list here in terms of the percentage penetration. there are a billion people living in africa, but very few of them, 40 million of them, have access to the internet. it's a big challenge there. the telecom infrastructures are still fairly immature. the economies vary pretty dramatically. and so getting them up and running on the internet is an important task, and one which is a significant challenge.

i thought you'd find it amusing to go way back into history to see the beginnings of the predecessor to the internet, the arpanet. there was a four node system that was set up in december of 1969. and i was fortunate enough to be a graduate student at ucla in september of 1969. i programmed the software that connected a sigma seven computer up to the first node of the arpanet.

the sigma seven isnow in a museum. and some people think ishould be there too. but this was the beginning ofwide area packet switching. and it was a grand experimentto see whether or not that technology would actuallysupport rapid fire interactions among timeshared machines. and indeed, it worked out. the packet switch wascalled an imp, an interface message processor.

this is what it looked like. it was delivered by bolt, beranek, and newman, the company in cambridge, massachusetts, in a very, very heavy-duty metal box. they knew that this was a military contract. and they didn't know whether they would be dropping these things out of helicopters or what. so they put it in a very, very heavy duty container. considering it was installed at ucla surrounded by graduate

and undergraduate students,probably this heavy-duty container was exactly the rightthing, even if it never deployed into any other place. this picture was actuallytaken in 1994. it was the 25th anniversaryof the arpanet. the guy on the far left is jonpostel, who all by himself managed the rfc series as theeditor, managed the allocation of ip address space, and managedthe delegation of top level domains in the domain namesystem for over 25 years.

you can imagine, though, by thetime 1996 rolled around, the beginnings of the dot boomhave happened, netscape communications has done its ipo,the general public has discovered internet, and we'reoff and running, jon realised that that function that heperformed needed to be institutionalized. and so he began a process oftrying to figure out how to create an organization thatwould perform these functions. in the end, an organizationcalled icann, the internet

corporation for assigned namesand numbers, was created in 1998 to take over theresponsibilities of domain name management, internetaddress allocation, and the maintenance of all the protocoltables associated with the internet protocols. sadly, jon postel passed awaytwo weeks before icann was actually formally created. but it has now persistedsince 1998. i had the honor of serving aschairman of the board of icann

until just about two weeks ago. and i managed to escape because my sentence was up. there are term limits in the bylaws. and it said i couldn't serve anymore on the board. and, frankly, i was pleased to turn this over to my successor, a man named peter dengate thrush, who is from new zealand. so the point i want to make here, apart from the amusing diagram of 1994, is that the internet, as it is now, has always had a certain international character to it.

the guy in the middle is steve crocker. he was the man who really blazed trails in computer network protocols. he ran what was called the network working group, which was a collection of graduate students in various computer science departments around the united states, developing the first host to host protocols. it was called ncp, network communications program.

and steve was the primaryleader that developed that protocol. it was used until 1983, whenthe tcp/ip protocols were introduced to a multi-networksystem. you'll notice that we tried todemonstrate in this picture for newsweek how primitivecomputer communications was in the 1960s. it took us almost eight hoursto set up this shot. and we drew all these pictureson foolscap paper of the

clouds of networks. and then we had to buy somezucchinis and yellow squash and string them together. you'll notice that this networknever would work because it was either ear toear or mouth to mouth. but there was no mouth to ear. we posed it this way on purpose,hoping there would be a few geek readers of newsweekwho would get the joke. this is what i looked like whenthe tcp/ip protocols were

being developed. i demonstrated-- not tcp/ip but the arpanet fromsouth africa in 1974. this was actually an interestingexperience because we brought an acoustic couplerwith us into south africa. it was the first time that thesouth african telecom company had ever allowed a foreignobject to be connected to their telephone system. and they were very concernedthat this acoustic coupler

might do damage totheir network. so we managed to persuade themthat it would be ok. and we connected the terminalsin south africa to the arpanet by way of a satellite link allthe way back to new york at the blazing speed of 300bits per second. well, the internet got startedin part because bob kahn, the other half of the basic designof the internet, told me that in the defense department, hewas looking at how to do computers in commandand control.

and if you were really seriousabout putting computers where the military needed to be, youhad to have computers running in mechanized infantry vehicles,and in tanks, and other things. and you couldn't pull wiresbehind them, because the tanks would run over the wiresand break them. so you needed radio for that. and you also needed to haveships at sea communicating with each other.

and since they couldn't pullcables behind them, because they'd get tangled in knots,instead you needed to have satellite communication forwide area linkages. and, of course, we neededwireline communications for fixed installations. so there were three networksthat were part of the internet development. one was called the packetradio net for mobile ground radio.

another was packet satelliteusing intelsat 4a across the atlantic to permit multipleground stations to compete for access to a sharedcommunications channel. and then the arpanet, thepredecessor, which was based on wirelines. sri international ran the packetradio test bed in the san francisco bay areaduring the 1970s. and they built this nondescriptpanel van as a way of testing packet radio bydriving up and down the

bayshore freeway andoccasionally stopping to make detailed measurements of packetloss, signal to noise ratio, the effects of shot noisefrom cars running back and forth nearby the van. the story goes that one daythey'd pulled off to the side of the road, and the driver, whowas another engineer, got out from the cab, and wentaround, and got into the back of the van. they were making a bunchof measurements.

and some police car pulled upand noticed that there was nobody in the cab. so he went around andknocked on the door. and, of course, theyopened up the door. and this policeman looks in. and he sees a bunch of hairy,geeky people with computers, and displays, and radios,and everything else. and he says, who are you? and somebody says, oh, wework for the government.

and he looks at him, and hesays, which government? but officer, we were only going50 kilobits per second. well, i remember in 1979, afterwe demonstrated that this technology actually worked,that we wanted to convince the us army that theyought to seriously try these out in field exercises. so i had a bunch of guys fromfort bragg 18th airborne corps coming out to drive around onthe bayshore to actually see how it worked.

and later, we actually deployedpacket radios at fort bragg for field testing. they were big one cubicfoot devices that cost $50,000 each. they ran at 100 kilobits and400 kilobits a second. they used spread-spectrumcommunications. now, this is pretty advancedconsidering it's 1975. and, of course, the physicalsize of the radio tells you something about the nature ofthe electronics that were

available at the timeto implement this. this is a view of theinside of that van. something else that was veryinteresting in all of this is that some of you will befamiliar with voice over ip. maybe many of you are usinggoogle talk, or skype, or one of the other applications,ichat. we were testing packetizedspeech in the mid-1970s. so, in fact, this is not sucha new thing, after all. in the case of the packet radionetwork and the arpanet,

we were trying to put packetized speech over what turned out to be a 50 kilobit backbone in the arpanet. and all of you know that when you digitize speech, normally it's a 64 kilobit stream. so cramming 64 kilobits per second into a 50 kilobit channel is a little bit difficult. and in fact, we wanted to carry more than one voice stream in this backbone. so we compressed the voice down to 1,800 bits per second using what was called linear predictive coding with 10 parameters.

all that means is that the vocal tract was modeled as a stack of 10 cylinders whose diameters were changing as the voice would speak. and this stack was excited by a formant frequency. you would send only the diameters of each of the cylinders plus the formant frequency to the other side. you'd do an inverse calculation to produce sound and hope that somehow it would be intelligible on the other end.
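
[a rough sketch of the bandwidth arithmetic just described: an illustrative, assumed frame layout, not the actual coder used in those experiments, showing how 10 quantized vocal-tract parameters plus pitch and gain per frame land near 1,800 bits per second.]

```python
# Back-of-the-envelope LPC bandwidth arithmetic. The frame layout is assumed
# for illustration: 10 vocal-tract parameters, a pitch value, and a gain,
# sent about 45 times a second instead of a raw 64 kbit/s PCM stream.

PCM_BPS = 8_000 * 8          # 8 kHz sampling, 8 bits per sample -> 64 kbit/s

FRAMES_PER_SECOND = 45       # assumed ~22 ms analysis frames
NUM_PARAMETERS = 10          # the 10 LPC parameters mentioned in the talk
BITS_PER_PARAMETER = 3       # assumed quantization of each "cylinder diameter"
PITCH_BITS = 7               # assumed bits for the excitation frequency
GAIN_BITS = 3                # assumed bits for loudness

bits_per_frame = NUM_PARAMETERS * BITS_PER_PARAMETER + PITCH_BITS + GAIN_BITS
lpc_bps = bits_per_frame * FRAMES_PER_SECOND     # 40 bits * 45 frames = 1,800

print(f"PCM voice stream: {PCM_BPS} bit/s")
print(f"LPC voice stream: {lpc_bps} bit/s")
print(f"compression ratio: {PCM_BPS / lpc_bps:.0f}x")
print(f"streams that fit a 50 kbit/s backbone: {50_000 // lpc_bps}")
```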

speaking of intelligible, i am struggling right now with a-- it's vodka. this should be very interesting by the end of it. so part of the problem with this compression ratio, going from 64 kilobits down to 1,800 bits per second, is that you lose a certain amount of quality of voice. and so when you spoke through this system, it basically made everyone sound like a drunken norwegian. the day came when i had to-- sorry about this. wow. the day came when i had to demonstrate this to a bunch of generals in the pentagon. and i got to thinking, how am i going to do this? and then i remembered that one of the guys that was participating in this experiment was from the norwegian defense research establishment.

his name was yngvar lundh. and so we got the idea that we'd have him speak through the ordinary telephone system. then we'd have him speak through the packet radio system. and it sounded exactly the same. so we didn't tell the generals that everyone would sound that way if they went through the system. today is actually a very important milestone.

today is the 30th anniversary ofthe first demonstration of getting all three of theoriginal networks of the internet to interconnect andcommunicate with each other. we took-- the packet radio vanwas driving up and down the bayshore freeway radiatingpackets. they were intended to bedelivered to usc information sciences institute in marina delrey, california, which is just to the westof los angeles. but we jiggered the gatewaysso that the routing would

actually go from the packet radio net, through the arpanet, through an internal satellite hop down to kjeller, norway, then down by landline to university college london, then out of the arpanet through another gateway, up through a satellite ground station at goonhilly downs. and then up through the intelsat 4 satellite, then down to etam, west virginia to another satellite ground station, through another gateway, and back into the arpanet, and then all the way down to usc isi.

so as the crow flies, the packets were only going 400 miles, from san francisco down to los angeles. but if you actually measured where the packets went, they went over 88,000 miles. because they went through two satellite hops up and down, and then across the atlantic ocean twice, and across the united states. so we were all pretty excited about the fact that it actually worked.
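
[a rough check of that mileage, assuming geostationary satellites at about 35,786 km altitude and ignoring slant-range geometry and the terrestrial legs.]

```python
# Rough check of the "over 88,000 miles" figure: two geostationary satellite
# hops, each an up-link plus a down-link, ignoring slant range and the
# terrestrial ARPANET legs of the path.

GEO_ALTITUDE_KM = 35_786          # geostationary orbit altitude
KM_PER_MILE = 1.609344

one_hop_km = 2 * GEO_ALTITUDE_KM  # ground -> satellite -> ground
two_hops_km = 2 * one_hop_km      # the SATNET hop and the internal satellite hop

print(f"two satellite hops: {two_hops_km:,} km "
      f"(about {two_hops_km / KM_PER_MILE:,.0f} miles)")
```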

i don't know about you, but i've been in the software game for a very long time. software never works. i mean, it's just a miracle whenever it works. so we were leaping up and down, screaming, it works, it works, as if it couldn't possibly have worked. so we celebrated this particular anniversary a couple of weeks ago at sri international. we invited everybody who had been involved in this

particular demonstration. and quite a few people were able to come back. and we got to renew our old acquaintances. but that was a very important milestone today, 30 years ago. of course, if you look at the internet in 1999, or even today, this is the sort of thing you see. highly connected, much larger, more colorful. and that's about as much as you can say about the internet that's a pretty accurate description.

it got a heck of a lot biggerover the 30-year period. some of the things that havemade the internet successful were fundamental decisions thatbob kahn and i and others made at the beginningof this thing. one thing that we knew is thatwe didn't know what new switching and transmissiontechnologies would be invented after we had settled on thedesign of the internet. and we did not want thenet to be outmoded. we wanted to be future-proof.

so we said we don't want theinternet layer protocol to be very aware of or dependent uponwhich technology was used to move packets from onepoint to another. we were fond of observing thatall we needed from the underlying transmission systemis the ability to deliver a bag of bits from point a topoint b with some probability greater than zero. that's all we asked. everything else was done on anend to end basis using things

like tcp or udp in order to recover from failures, or to retransmit, or to weed out duplicates. so we were, i think, very well-served by making that particular philosophical decision.
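
[a toy illustration of that end-to-end idea, not any historical code: a minimal stop-and-wait sender and receiver over udp, where the network only moves datagrams and retransmission and duplicate removal happen entirely at the two ends. the port number and messages are made up.]

```python
# Toy end-to-end reliability over UDP: the network just delivers datagrams
# (and may lose or duplicate them); sequencing, retransmission, and duplicate
# suppression live entirely in the two end hosts.
import socket
import threading

ADDR = ("127.0.0.1", 40123)           # made-up local address for the sketch

def receiver(expected):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(ADDR)
    seen, received = set(), []
    while len(received) < expected:
        data, peer = sock.recvfrom(2048)
        seq, payload = data.split(b":", 1)
        if seq not in seen:               # end-to-end duplicate filter
            seen.add(seq)
            received.append(payload)
        sock.sendto(b"ACK:" + seq, peer)  # acknowledge, even for duplicates
    print("receiver got:", received)

def send_reliably(messages):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.2)                  # end-to-end retransmission timer
    for seq, payload in enumerate(messages):
        packet = str(seq).encode() + b":" + payload
        while True:
            sock.sendto(packet, ADDR)     # the network may drop this datagram
            try:
                ack, _ = sock.recvfrom(2048)
                if ack == b"ACK:" + str(seq).encode():
                    break                 # acknowledged, move to the next one
            except OSError:
                pass                      # timed out: retransmit

messages = [b"hello", b"world"]
t = threading.Thread(target=receiver, args=(len(messages),), daemon=True)
t.start()
send_reliably(messages)
t.join(timeout=2)
```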

but there was something else that also derived from that decision that i didn't fully appreciate until later. the packets not only don't care how they are being carried, but they don't know what they're carrying. they're basically ignorant of anything except that they're carrying a bag of bits. the interpretation of what's in the packets occurs at the edges of the net, in the computers that are transmitting and receiving the data. the consequence of this end to end principle has been that people can introduce new applications in the internet environment without having to change the underlying networks and without having to get permission from the internet service providers to try out their new ideas.

so when larry and sergey startedgoogle, they didn't have to get permission froman isp in order to try this idea out. or when jeff bezos did amazon,or when david filo and jerry yang set up yahoo! they simply did it. and the same is truemore recently when skype was created. nobody had to give anypermission to anyone.

same is true for bittorrent and many of the other peer-to-peer applications. essentially, you get to do what you want to do because the underlying network is, at least up until now, very neutral about the applications. so this end to end principle has been an important engine for innovation in the internet in the past. google and others believe that it should continue to be an engine of innovation.

and it can only do that if theinternet service providers essentially keep their hands offthe applications and just simply carry bits from point ato point b. this doesn't mean that an isp can't also havevalue added applications. that's not what this means. it just means that the providerof the underlying transmission, in many casesbroadband transmission, that provider should not takeadvantage of carrying the underlying transmission tointerfere with other parties

competing at higher layers inthe protocol for applications that might be of interestto the consumers. similarly, the consumers, whobelieve that they're buying access to the full interneteverywhere in the world when they acquire broadband accessto the net, have a reason to expect that no matter where theyaim their packets, that the underlying system willcarry them there in a nondiscriminatory way. this does not mean, for example,that you must treat

every single packet on the internet in precisely the same way. we all understand the need for control traffic with high priority. we understand the possibility that some traffic needs low latency. we understand that you may want to charge more for higher capacity at the edges of the net. net neutrality does not mean that everything is

precisely the same. but what it does mean is thatthere is no discrimination with regard to whose servicesthe consumer is trying to get to or who is offering thoseservices when it comes to traversing the broadbandchannels that the isps are providing. one other thing about broadbandthat's turning out to be a problem is that, atleast in the united states, it's an asymmetric service, asyou often can download things

faster than you can upload them,leading to anomalies like you can receivehigh-quality video, but you can't generate it. my general belief is that theconsumers are going to be unsatisfied with this asymmetryand that there will be pressure to provide foruniform and symmetric broadband capacity, which iswhat you can get in other parts of the world. in kyoto, you can get a gigabitper second access to

the internet. it's full duplex, and it costs8,700 yen a month. it almost made me want to moveto kyoto, because it just seemed like such a very friendlyenvironment to try new things out. so we're very concerned aboutthe symmetry and neutrality of the underlying network. the other thing i wanted topoint out is that this chart, which was prepared by geoffhuston, who's an engineer in

australia, is intended to illustrate the utilization of the ip version 4 address space. the important part of this chart is the one that's trending downward. that part is saying basically these are the address blocks that the internet assigned numbers authority is allocating to the regional internet registries and that it will run out of ipv4 address blocks somewhere around the middle of 2010.

the regional internetregistries-- yours here in europeis the ripe ncc-- will presumably handout portions of those address blocks. they're likely to use thoseup by the middle of 2011. the implication of this isthere won't be any more available ipv4 address space. this doesn't mean that thenetwork will come to a grinding halt.

but what it does mean is therewon't be any more ipv4 address space and that the onlyaddresses that will be available for further expansion will be ipv6 addresses. now, i have to admit thati'm personally the cause of this problem. around 1977, there had been ayear's worth of debate among the various engineers workingon the internet design about how big the address space shouldbe for this experiment.

and one group argued for 32bits, another for 128 bits, and another for variablelength. well, the variable length guysgot killed off right away because the programmers saidthey didn't want variable length headers because it washard to find all fields and you had to add extracycles to do it. and it's hard enoughto get throughput anyway, so don't do that. the 128 bit guys were saying,we're going to need a lot of

address space. and the other guys were saying, wait a minute. this is an experiment. 32 bits gives you 4.3 billion terminations. and how many terminations do you need to do an experiment? even the defense department wasn't going to buy 4.3 billion of anything in order to demonstrate this technology. so they couldn't make up their minds.

and i was the program manager atthe time spending money on getting this thing going. and finally, i said, ok, youguys can't make up your mind. it's 32 bits. that's it. we're done. let's go on. well, if i could redo it, ofcourse, i'd go back and say, let's do 128.

but at the time, it wouldhave been silly. we were using full duplexechoplex kinds of interactions with timeshared machinesand local terminals across the network. and you can imagine sending onecharacter with 256 bits of overhead just for theaddressing, it would have been silly. so we ended up with a 32bit address space. i thought we would demonstratethe capability of the internet

and that we would be convinced,if it worked, that we should then re-engineerfor production. well, we never got tore-engineer it. it just kept growing. so here we are. we're running out. and we have to usethe new ipv6. by the way, if you're counting,and you wonder, ok, ipv4, ipv6, what happenedto ipv5?

the answer is it was an experiment in a different packet format for streaming audio and video. and it led to a cul de sac and we abandoned it. and the next available protocol id was six. so that's why we have ipv6. now, with 128 bits of address space, you can have up to 3.4 times 10 to the 38th unique terminations. i used to go around saying that means that every electron in the universe can have its own web page if it wants to

until i got an email from somebody at caltech: dear dr. cerf, you jerk, there are 10 to the 88th electrons in the universe, and you're off by 50 orders of magnitude. so i don't say that anymore. but it is enough address space to last until after i'm dead, and then it's somebody else's problem.
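
[the address-space numbers he quotes, checked directly; the 10 to the 88th electron estimate is the figure quoted in the talk.]

```python
# Checking the address-space arithmetic quoted in the talk.
print(f"ipv4 (32-bit) addresses:  {2**32:,}")       # about 4.3 billion
print(f"ipv6 (128-bit) addresses: {2**128:.2e}")    # about 3.4 x 10^38
# with ~10^88 electrons in the universe (the figure quoted above),
# "a web page per electron" falls short by roughly 88 - 38 = 50 orders of magnitude.
print(f"orders of magnitude short: {88 - 38}")
```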

i won't have time to go through very many of these, but i wanted to just emphasize that despite the fact that the internet has been around for some time now, certainly from the conceptual point of view for 35 years, there are still a whole bunch of research problems that have not been solved. let me just pick on a couple of them that i consider to be major issues. security is clearly a huge issue. we have browsers that are easily penetrated; they download bad java code and turn their machines into zombies, which become part of botnets.

you have problems of denial of service attacks. you have the ability to actually abuse the domain name system and turn some of its components into amplifiers for denial of service attacks, which adds insult to injury. multihoming, we haven't done a very good job of that in either v4 or v6. multipath routing, we usually pick the best path we can, but it's only one path. if there were multiple paths between a source and

destination, if we could runtraffic on both of them, we'd get higher capacity. we don't do that. we don't use broadcast mediaat all well in the current internet architecture. when you think about it, we turnbroadcast channels into point to point links. and it's a terrible waste ifyour intent is to deliver the same thing to a largenumber of receivers.

and that could very well turn out to be a useful capability, not just for delivering things like video or audio to a large number of recipients, but software. people want to download a particular piece of software. if enough people wanted the same thing, you could schedule a transmission over a broadcast channel that would allow everyone to receive it efficiently, perhaps by satellite, or over a coaxial cable, or over a cable

television network. so we haven't done any ofthose things very well. and we don't have a lot ofexperience with ipv6. and what's even more important,we don't have a lot of experience running two ipprotocols at the same time in the same network, whichis what we are going to have to do. so in order to transition, wecan't simply throw a switch and say, tomorrow we'reusing ipv6 only.

we're going to have to spendyears running both v4 and v6 at the same time. and we don't have a lot ofexperience with that. when you do two things insteadof one thing, you get more possible complications. the network management systemsmay not know what to do when it gets errors fromboth v4 and v6. or worse, it gets errors fromv6 but not from v4. the routing is working for one,but not for the other.

do i reboot the router or not? what do i do? so there are a wide range of issues, including a very fundamental problem with ipv6. we don't have a fully connected ipv6 network. what we have is islands of ipv6. this is not the circumstance we had with the original internet. every time we added another network, it was v4, and it connected to an already connected network. but when we start putting in v6 and not implementing it uniformly everywhere, then we're going to have islands.

they could be connected by tunnels through v4. it's a very awkward proposition. until we have assurance that we have a fully connected ipv6 network, people are going to be doing domain name lookups, getting ipv6 addresses, trying to get to them, and not getting there, because you're in a part of the internet that doesn't connect to the other parts of the internet that are running ipv6.
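
[a minimal sketch of the failure mode just described: a name resolves to ipv6 addresses, but there is no ipv6 path from this island, so a dual-stack client has to fall back to ipv4. the host name and port are placeholders.]

```python
# Dual-stack connection fallback: prefer the IPv6 addresses a name resolves
# to, and fall back to IPv4 when the IPv6 "island" we sit in has no route.
import socket

def connect_dual_stack(host, port, timeout=3.0):
    addrinfos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # try AAAA (IPv6) records first, then A (IPv4) records
    addrinfos.sort(key=lambda ai: 0 if ai[0] == socket.AF_INET6 else 1)
    last_error = None
    for family, socktype, proto, _, sockaddr in addrinfos:
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(sockaddr)        # fails if there is no path, e.g. an
            return sock                   # unreachable IPv6 destination
        except OSError as exc:
            last_error = exc
    raise last_error or OSError("no usable address")

conn = connect_dual_stack("www.example.com", 80)   # placeholder host and port
print("connected via", "ipv6" if conn.family == socket.AF_INET6 else "ipv4")
conn.close()
```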

so these are just headaches that are going to hit. and they're going to start hitting in 2008, because all of us are going to have to start getting v6 into operation before we actually run out of ipv4 addresses. switching to a slightly different view of the net, there have been some really surprising social and economic effects on the net that are becoming more visible.

the one that i find the mostdramatic is that the information consumersare now becoming the information producers. so you can see it in the formof blogging or youtube or google video uploads, personalweb pages, and other things. people are pushing informationinto the network as well as pulling it out. this is unlike any broadcast ormass medium in the past. in the past, a mass medium had asmall number of information

producers and a verylarge number of information consumers. the internet inverts all of thatand allows the consumers also to produce content. wikipedia has taught us anothervery interesting thing about this internetenvironment. i want you to think about aparagraph in wikipedia that you're reading. and you see one word whichshould be changed because

you're an expert in the areaand you know that the statement is wrong orthe sense of the paragraph is wrong. you could certainly makethat one word change. you would never publish a oneword scholarly paper. you wouldn't publisha one word book. but you can publish oneword of a change in wikipedia paragraph. and it's useful.

it's a contributionto everyone who looks at that paragraph. so the internet will absorb theone word change, the one page change, one paper, onebook, one movie, one video. it is willing to absorbinformation at all scales and in all formats, as long asthey can be digitized. so the barrier to contributioninto the internet environment is essentially zero. another phenomenon which israpidly evolving is social

networking. many of you may already be usinglinkedin or myspace or facebook or orkut orsome of the others. that's a phenomenon that's goingto continue to grow. especially young people enjoyinteracting with each other in this new medium. and they show a considerableamount of creativity in inventing new ways ofinteracting with each other. similarly, game playing.

second life, world of warcraft, and a bunch of others. everquest is another one. what's interesting about theseparticular environments is really twofold. one of them is that there arereal people making decisions in these games. and some economists at harvard,for example, have asked their students to gobecome participants in second

life in order to observe thekinds of economic decisions that people are making in thecontext of these games, because they're actually tryingout different economic principles within various partsof the game environment. and so it's actually in anexperiment that you couldn't necessarily conduct in thereal world that's being conducted in this artificialenvironment. the other important observationi would make is that the economics of digitalinformation are dramatically

different from the economicsof paper or other physical media. just to emphasize this, let megive you a little story. i bought two terabytes of diskmemory a few months ago for about $600 for use at home. and i remembered buying a 10megabyte disk drive in 1979 for $1,000. and i got to thinking, whatwould have happened if i'd tried to buy a terabyteof memory in 1979?

and when you do the math, it would have cost me $100 million. i didn't have $100 million in 1979. and to be honest with you, i don't have $100 million now either. but if i'd had $100 million in 1979, i'm pretty sure my wife wouldn't have let me buy $100 million worth of disk drives. she would have had a better thing to do with it.
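
[the math he refers to, spelled out: a terabyte priced at 1979's rate of $1,000 per 10 megabytes, versus the roughly $600 for two terabytes mentioned above.]

```python
# The disk-price arithmetic from the talk.
TB_IN_MB = 1_000_000
price_1979 = (TB_IN_MB / 10) * 1_000   # $1,000 per 10 MB drive -> $100 million per TB
price_2007 = 600 / 2                   # about $600 for two terabytes -> $300 per TB
print(f"1 TB at 1979 prices: ${price_1979:,.0f}")
print(f"1 TB at 2007 prices: ${price_2007:,.0f}")
print(f"price drop: about {price_1979 / price_2007:,.0f}x")
```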

the point i want to make, though, is that that's a very dramatic drop in the cost of disk storage. you're seeing similar kinds of drops in the cost of moving bits and processing bits electronically. the business models that you can build around those economics are very different from the business models that were built around other media, paper or other physical media. and companies that built their businesses around the older economics are going to have to learn to adapt to the new economics of online, real time, digital processing,

transmission and storage. and if they don't figure out howto adapt to that, they'll be subject to darwinianprinciples. this is a very simple principle,adapt or die. and so if you don't figure outhow to adapt, the other choice is the only one you have. so anumber of companies are going to be, i would say, challengedto understand that the economics of digital informationare really demanding them to rethinktheir business models.

this was a chart thatwas generated by a company called sandvine. they're doing some deep packetinspection to understand the behavior of users at theedge of the network on a particular channel. what they were looking at hereis a variety of applications that are visible as the packetsare traversing back and forth over access lines. what was important here isthat the youtube traffic

represented somewhere between 5%and 10% of all the traffic that they measured on thisparticular access channel. and i bring this up primarilyto say that youtube is only two years old. so just look at what happenedwith an application very recently suddenly blossominginto a fairly highly demanding application in terms of capacityon the network. we can easily imagine that otherapplications will be invented that may have differentprofiles of demand

for traffic, either uploadingor downloading. or maybe low latency, ormany other things. so the point here is that thenetwork is very dynamic. it is constantly changing. new applications are comingalong, making new demands on its capacity. and so this is not stable inthe same sense that the telephone network was stable,where you could use erlang formulas to predict how manylines you needed to keep

people from getting a busy signal, below 1% probability. the internet does not have stable statistics like that. and because new applications can be invented simply by writing a new piece of software, i think we're not ever going to be able to predict very well the actual behavior of the net at the edge. in the core of the net, it's a different story, because you're aggregating a large number of flows. and you can get fairly stable statistics for the core of the net, but not at the edge.
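
[for contrast, a sketch of the erlang b calculation the telephone world could rely on: sizing a trunk group so the blocking probability, the chance of a busy signal, stays below 1%. the offered load of 100 erlangs is just an example.]

```python
# Erlang B blocking probability via the standard recurrence, and the sizing
# question it answers: how many lines keep busy signals below 1%?

def erlang_b(offered_load: float, lines: int) -> float:
    b = 1.0
    for k in range(1, lines + 1):
        b = (offered_load * b) / (k + offered_load * b)
    return b

def lines_needed(offered_load: float, max_blocking: float = 0.01) -> int:
    n = 1
    while erlang_b(offered_load, n) > max_blocking:
        n += 1
    return n

load = 100.0                              # example offered load, in erlangs
n = lines_needed(load)
print(f"{n} lines keep blocking at {erlang_b(load, n):.3%} for {load} erlangs")
```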

i've been thinking a lot about how the economics of digital storage and transmission have an effect on certain kinds of media, like video in particular. let me just take a moment to make a couple of observations. 15% of all the video that people watch is real time video. it's being produced in real time. it's a news program.

it's an emergency or maybe a sporting event. 85% of video that people watch is actually pre-recorded material. so in this chart, there are two kinds of video, rt for real time video that's been generated in real time. and pr for pre-recorded video that's being transmitted through the network. and imagine now that we've got two axes. one is the transmission rate that's available

to you as the consumer. and the other is the storageyou have available locally. and the split is that hightransmission rate means it's sufficiently high to deliverthings in real time and low means you can't deliverit in real time. low storage means there isn'tenough memory locally to store any reasonable amountof video. high means there's enoughstorage to store reasonable amounts, which might be measuredin hours of video.

so the question is,which quadrant are you in as the consumer? if you're in the lower left handquadrant, where you can't transmit in real time, and youdon't have any place to store it, you're basicallyout of luck. video is not an interestingmedium for you. if you have very hightransmission rates and no storage available, you caneasily receive the streams in real time, just as you wouldover a typical cable or

satellite or over the airtransmission system. you can even potentially receivethe pre-recorded material at higher thanreal time speeds. but since you don't have anyplace to store them, it doesn't do you a lot of good. so basically, in the upperleft, you're stuck with streaming video in real time. in the upper right, it'smuch more interesting. because here you have high speedavailable and you have a

lot of storage available. the real time stream couldbe delivered and watched in real time. it could be delivered in realtime and stored away and watched later, justlike tivo or other personal video recorders. but it's the pre-recordedstuff that gets really interesting. you could clearly transmitit in real time.

but you can also transmit itfaster than real time because you have a data rate thatexceeds the rate at which the video is normally transmittedfor viewing. the implication of this is thatvideo on demand no longer means streaming video. it means delivering videopotentially faster than you could watch it. anybody that uses an ipod todayis experiencing that. because you're downloading musicfaster than you could

listen to it. and then you play it backat your leisure whenever you want to. it's my belief that iptv isgoing to be the download and playback style of ipod as longas the data rates at the edges of the net are sufficientlyhigh. so what does that mean forthe television industry? and i'd like to use the wordtelevision here to refer to a business model and the wordvideo to refer to the medium.

and so my interest here isunderstanding what happens to the video medium and thebusiness of video when it ends up into an internetenvironment. one thing that's very clear isthat because you packetized everything when you'redownloading, the data that you're downloading doesn'thave to be confined to video and audio. it could easily containother information. so when you get a dvd, it hasbonus components on it.

it's bonus videos. it's textual material. maybe it's the biographies ofthe actors, or the story of how the movie was made,or the book that the movie was based on. so when we're downloading stuffassociated with video and it's coming through theinternet, we can download all forms of digitized content,store it away, and then access it later.

among the things that could be downloaded is advertising material. in the conventional video world, you interrupt the video for an advertisement, and you force it on the users, on the consumers. in the world of internet based systems, when you're playing back the recorded content, there's a program, a computer, which is interpreting the traffic and interpreting the data.

and so it's not just a stupidraster scan device. it can actually make decisionsbased on the kind of information that'sbeing pulled up. so imagine that you've composedan entertainment video and that you've made someof the objects in the field of view-- like maybe this macintosh issitting in the field of view-- you've made those objectsdetectable or sensitive to mousing.

so if you mouse over thatparticular object, it highlights. and a window pops open. it says, gee, i see you'relooking at the macintosh. let me tell you a little bitmore about that product. click here if you'd like to findout whether there are any available at the apple store. by the way, do you want tocomplete the transaction now? and then go back to watchingthe movie.

the idea of allowing the users to mouse around in the field of view of an entertainment video is a transforming idea with regard to advertising. and it feels a little funny, a computer programmer like me sitting up here, getting excited about advertising. but remember, that's where google makes all its revenue. so we care a lot about new styles of advertising that would improve the consumers' control over what advertising he or she has to be exposed to.

and also, it turns out theadvertisers care a lot about knowing whether the users areinterested in their products. and so knowing thatnobody is-- if they're not interested,they don't click. if they are, they do. and that is a big jump up inunderstanding something about the potential client for yourproducts or your services. so my prediction is that videoin the internet environment, where high speed interfacesare available and lots of

storage are available, will be atransforming opportunity for users to control advertising andfor advertisers to wind up with a much better productthan they have today. i mentioned mobiles before. and i just want to emphasizethat these are programmable devices. these are not just telephonesanymore. google recently announced anoperating system called android that we would like tomake available to anyone who's

building these wireless platforms. the purpose is to open up the platform so that you can download new applications and allow users to try out new things without too much difficulty. these things are already useful for accessing information on the net. here, especially, in europe, they are being used to make payments. this is a challenge, though.

i've got a blackberry here, andit has a screen that's the size of a 1928 television set. and the data rates that you canreach this thing with vary from tens of kilobits a secondto maybe as much as a megabit. and the keyboard is justgreat for anybody who's three inches tall. so these are prettylimiting devices. but it seems to me that they aregoing to be very important because of the prevalence ofthese devices, especially in

areas where alternativeaccess to the internet isn't available. what is interesting about theseis that because you carry them around on your personor in your purse, they become your informationaccess method, your information source. and often, you want informationwhich is relevant to where you areat the moment. so geographically indexedinformation is becoming very,

very valuable and veryimportant, especially as you access it through mobiles. i have a small anecdote to sharewhich emphasized for me the importance of having accessto geographically indexed information. my family and i went on avacation this may in a place called page, arizona. it's adjacent to somethingcalled lake powell. we decided to renta houseboat.

believe me, if you like steeringthings around, don't rent a houseboat. it steers like a houseboat. anyway, i was terrified that iwas just going to ricochet my way down the lake. but in any case, the problem isonce you get on the boat, there's no place toget any food. so you have to prepareby buying food and bringing it onboard.

so as we were driving intolake powell, we were discussing what meals wewere going to produce. and somebody said, well,i want to make paella. and i thought, well,that's interesting. you need saffron to do that. where am i going to find saffronin this little town of page, arizona? so fortunately, i gota good gprs signal. so i pulled out theblackberry.

and i went to google. and i said, page, arizonagrocery store saffron. and up popped a response withan address, a telephone number, name of the store, anda little map to show you how to get there. so i clicked on thetelephone number. and, of course, this being atelephone, it made the call. the phone rang. somebody answered.

and i said, could i please speakto the spice department? now, this is a little store. so it's probably the ownerthat, this is the spice department. and i said, do youhave any saffron? he says, i don't know. he went off. and he came back. he says, yeah, i've got some.

so we followed the map. this is all happening in real time. we follow the map, drive into the parking lot. and i ran in and bought $12.99 worth of saffron. that's 0.06 ounces, in case you wondered. and we went off on lake powell. we made a really nice paella. what really struck me is that i was able to get information that was relevant to a specific need in real time and

execute this transaction. it would not have worked-- can you imagine going and trying to find something in the white pages or the yellow pages at a gas station or what have you? the fact that you can get information that is useful to you in real time is really quite striking. so i believe that as this mobile revolution continues to unfold, that geographically indexed information is going

to be extremely valuable. well, some of you have beenaround for a while and have watched the internet grow. i've been a little stunned atsome of the devices that are starting to show up on thenetwork, like internet enabled refrigerators or picture framesthat download images off of web sites and thencycle through them automatically, or things thatlook like telephones, but they're actually voiceover ip computers.

but the guy that really stunnedme is the fellow in the middle. he's from san diego. he made an internetenabled surfboard. i guess he was sitting out onthe water thinking, you know, if i had a laptop in mysurfboard, i could be surfing the internet while i'm waitingto surf the pacific ocean. so he built a laptop intohis surf board. and he put a wifi servicein the rescue

shack back on the beach. and he now sells this as a product. so if you're interested in buying an internet enabled surfboard, he's the guy to go to. i honestly think that there are going to be billions of devices on the net, more devices than there are people. and if you think about the number of appliances that serve you every day, there are lots of them. and imagine that they're all online.

imagine being able to interactwith them or use intermediary services to interactwith them. so as an example, instead ofhaving to interact directly with your entertainment systems,if they were up on the network and accessible thatway, you might interact through a web page on a serviceon the network, which then turns around and takescare of downloading movies that you want to watch or musicthat you want to listen to or moving content fromone place to another.

all of that could be donethrough the internet. in fact, a lot of those deviceshave remote controls. and if you're like me, thereare lots of them. and then you fumble aroundtrying to figure out which remote control goeswith which box. and after you figure that out,that's the remote control with the dead battery. so the idea here is to replaceall those with your mobile, which is internet enabled.

so are the devicesin the room. you program them andinteract with them through the internet. so you don't even have tobe in the same room. gee, you don't even haveto be in the house. you could be anywhere in theworld where you could get access to the internet, andyou could control your entertainment systems. of course, so could the15-year-old next door.

and so you clearly need strongauthentication in order to make sure only the authorizedusers of these systems are controlling your entertainmentsystem, or your heating and ventilation, or your security. all of these things couldeasily be online-- lots of appliances at home,appliances in the office, all manageable through theinternet and offering opportunities for third partiesto help manage some of that equipment for you.

so this kind of network of things is creating opportunities for people to offer new products and services. i don't have time to go through all of these various examples. but there are little scenarios you can cook up, like the refrigerator that's online. if your families are like american families, the communication medium within american families is generally

paper and magnets on the frontof the refrigerator. and now, if you put up a nicelaptop interface on the front of the refrigerator door, youcan communicate with family members by blogging, by instantmessaging, and by web pages and email. but it gets more interestingif you imagine that the refrigerator has an rfiddetector inside. and rfid chips are on theproducts that you put inside the refrigerator.

so now the refrigerator canknow what it has inside. and while you're at work, it'ssurfing the internet looking for recipes that it knowsit could make with what it has inside. so when you get home, you see anice list of things to have for dinner if you like. and you can extrapolate this. you might be on vacation,and you get an email. it's from your refrigerator.

it says, i don't know how muchyogurt is left, but you put it in there three weeks ago,and it's going to crawl out on its own. or maybe your mobile goes off. it's an sms from yourrefrigerator. don't forget themarinara sauce. i have everything else i needfor spaghetti dinner tonight. now, unfortunately, the japanesehave spoiled this beautiful scenario.

they've invented an internetenabled bathroom scale. when you step on the scale, itfigures out which family member you are basedon your weight. and it sends that informationto the doctor to become part of your medical record. and, of course, that's ok,except for one problem. the refrigerator's onthe same network. so when you come home,you see diet recipes coming up on the display.

or maybe it just refuses to open because it knows you're on a diet. i'm going to skip over-- oh, wait a minute. i'm sorry. there's some important stuff here. i mentioned ipv6 earlier. and that's something that really is a major transformation of the core part of the internet at the internet protocol layer.

there are other things that arehappening in 2007 and now 2008 that are going to have animpact on all of us who offer various kinds of internetservice. one thing is the introductionof non-latin top level domains, internationalizeddomain names, that are written in character sets that includethings like arabic, and cyrillic, and hebrew, andchinese of various kinds, and kanji, and hangul,and so forth. icann has already put up 11test languages in the top

level domains, in the root zone file. and it's encouraging people to go there and to try out interactions with those domain names using various application software packages, including browsers and also email, to give you a chance to see how the software will interact with these non-latin character domain names. they're represented typically in unicode. it may be unicode encoded in utf-8, for example. but what's important is that the software has to recognize that these are domain names, even though they're expressed using strings other than simply a through z and zero through nine and a hyphen.
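
[a small illustration of what that recognition involves: mapping a non-latin label to the ascii-compatible form the dns actually carries, using python's built-in idna codec. the domain name is made up.]

```python
# A non-Latin domain name and the ASCII-compatible ("xn--") form that the
# DNS carries on the wire. The name itself is made up for the example.
name = "münchen.example"
ascii_form = name.encode("idna")     # Python's built-in IDNA codec
print(ascii_form)                    # b'xn--mnchen-3ya.example'
print(ascii_form.decode("idna"))     # back to 'münchen.example'
```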

the other thing which is going on is the digital signing of domain name entries. so dnssec is a way of allowing someone who's doing the domain name lookup to ask for a digitally signed answer. and when that comes back, you have some validation that the information in that domain name entry has not been

altered, that it has maintained integrity from the time it was put in. this is not encrypting anything. it is simply a question of digitally signing things to make sure that the information is as valid as it was when it went in the first place. those things are all under way now in the internet. and they may have an impact on every one of us that are involved in building systems that run on the internet.

i'd like to just quickly gothrough a couple of other points here. one of them is that intellectualproperty handling in the online environment isbecoming quite a challenge. digitized information is easyto copy, and it's easy to distribute. the philosophy behind copyrightlaws, including the bern convention that wasdeveloped here in switzerland, says that physical copies ofthings are what we are

concerned about. and the difficulty of copyingor the cost of copying physical objects is what hasmade that particular law [inaudible] well. but in the presence of digitalversions of these things, it's turning out to be muchharder to enforce. it may very well be that we needto back away and rethink what copyright means in thisonline and digital

environment. there are alternatives that have been suggested. creative commons is one of them, celebrating its fifth anniversary this year, that may find alternative ways of compensating authors or of letting authors say whether they want to be compensated or in what way they want to be compensated for their intellectual property that's been put into this online environment. tim berners-lee has worked for some time and spoken often

about the semantic web. this is an idea that allows us to interact with the content of the network, not just with strings, but with some notion of the meaning of the strings. that project is still a work in progress. if in fact it's possible to codify or otherwise indicate the meaning of contents on the net, it would be very beneficial, certainly, from google's point of view. because today, we tend to navigate you to a document.

but what you really wanted was answers. and in order to do a better job of helping you find answers, we need to understand the semantics of what's in the net. and right now, we can't do that. i'm becoming increasingly concerned about the nature of the objects that are in the internet today. some of them are extremely complex. they're not interpretable without software.

a spreadsheet, for example, is a dead thing until you actually bring it up in the spreadsheet program and interact with it. it's a very complex object sitting on the net somewhere. you can't simply print it out. well, you can, but you get a very limited representation of the real content, meaning, and complexity of the objects. so i'm concerned that over time, the contents that we put into the internet will be dependent on software to

interpret what the bits mean. that leads me to my biggest worry, which i'll call bit rot. if in fact bits are stored away over time, and they're moved from one medium to another as new storage media come along, what will happen if we lose access to the software that knows how to interpret the bits? at that point, you won't know what you have other than a bag of bits.

and so the question is, what do we do about that? let me give you a scenario. it's the year 3000. and you've just gone through a google search. and you've turned up a powerpoint file from 1997. so suppose you're running windows 3000. the question is, does windows 3000 know how to interpret the 1997 powerpoint? and the chances are it does not.

and this is not an arbitrary dig at microsoft. even if this were open source software, the probability that you would maintain backward compatibility for 1,000 years strikes me as being fairly low. so the question is what to do if you're thinking about information that you want to be accessible 1,000 years from now, like vellum documents are today that are 1,000 years old. we have to start thinking about how to preserve software

that will be able to interpret the bits that we keep saving. you may even have to go so far as to save the operating system that knew how to run the application that could interpret the bits. and maybe even emulate hardware that ran the operating system that knows how to run the application that can interpret the bits. there is very, very little progress made right now in that domain, something that we should be

very concerned about. otherwise, 1,000 years from now, historians will wonder what the heck we did in the 21st century. there will be nothing about us other than a pile of rotting bits. and that's all they will know, is that we were the rotten bit generation. and i'm sure that's not what we want them to know about us. ok, last update.

and i have a whole bunch of questions here that people have asked already. and i'll try to answer some of them. this project that i'm about to tell you about is not a google project. google lets me have time to work on it. but i don't want you to walk out of this auditorium saying, hah, i've figured out what google's business plan is. it's going to take over the solar system.

that's not what this is about. this is about supporting the exploration of the solar system using standardized communication protocols. because historically, we have not standardized these communication systems in the same way that we've standardized communication in the internet. now, we all know we've been exploring mars using robotic equipment. usually, to communicate with the spacecraft, we use the

deep space network, which was developed in 1964. these are big 70-meter dishes in goldstone, california, madrid, spain, and canberra, australia. there are also 35-meter antennas adjacent to them as well. half of one is kind of visible over the right hand corner of that image. so as the earth rotates, these big deep space dishes are rotating along and seeing out into the solar system, able to communicate with spacecraft like this one, which may be in

orbit around a planet or flying past an asteroid, or in some cases actually landing on the surface of a planet like the rovers in 2004. one thing that you might not know is that most of the communication protocols that are used for these deep space missions are tailored to the sensors that are on board the spacecraft platforms, in order to make most efficient use of the available communication capacity, which is often fairly limited.

the rovers that went onto mars in the beginning of 2004 are still running, which is pretty amazing, considering their original mission time was only 90 days. so they're still in operation, although one of them-- i forget which one-- has a broken wheel, and it's kind of dragging furrows in the martian soil. but they're still operational. one of the problems that showed up, though, very early in the rover mission is that the plan was to transmit data

with the high gain antenna. there's a thing that looks like a pie tin on the right hand side. that was the high gain antenna that was supposed to transmit data straight back to the deep space network from the surface of mars. when they turned the radios on and started transmitting, they overheated. and it happened on both spacecraft, so it

was a design problem. so they had to reduce the duty cycle to avoid having the radio damage itself, which really drove the principal investigators crazy. because 28 kilobits wasn't very much to begin with. and now it's less frequent transmissions. so the guys at jpl figured out a way to essentially reconfigure systems so that the data could be transmitted from the rover up to an orbiter.

and there were four orbiters available around mars. they were reprogrammed in order to take the data up on a 128 kilobit radio, a different radio, which didn't have very far to go. so the signal to noise ratio was high enough to get much higher data rates. then the data was stored in the orbiter until it got around to the point where it could transmit the data to the deep space net.

and once again, it could transmit at 128 kilobits a second, partly because the orbiters had bigger solar panels and more power available than the rovers on the surface. so the net result is that all the information that's coming back from the rovers is going through a store and forward system, which, of course, is the way the internet works. this reconfirmed an idea that my colleagues at the jet propulsion lab and i have been pursuing since 1998.
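to make the store and forward idea concrete, here is a minimal, purely illustrative python sketch (mine, not the actual jpl or dtn software): a relay holds data bundles in a queue while it has no contact, and forwards everything it has accumulated once a contact window opens.

    from collections import deque

    class StoreAndForwardRelay:
        """toy model of an orbiter relaying rover data to the deep space network."""

        def __init__(self):
            self.queue = deque()   # bundles held until a contact is available

        def receive(self, bundle):
            """accept a bundle from the rover and hold it."""
            self.queue.append(bundle)

        def contact_opened(self, send):
            """when the ground station comes into view, drain the queue."""
            while self.queue:
                send(self.queue.popleft())

    # usage: the orbiter stores rover data, then forwards it when earth is in view
    relay = StoreAndForwardRelay()
    relay.receive("image_001")
    relay.receive("soil_spectrum_017")
    relay.contact_opened(lambda bundle: print("downlinked:", bundle))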

and that's the definition of protocols to run an interplanetary extension of the internet. basically, we assume we're running tcp/ip on the surface of the planets and in the spacecraft. those are low latency environments. tcp/ip works very well there. we thought we could get away with running tcp/ip for the interplanetary part. that idea lasted about a week.

it's pretty obvious what the problem is. when earth and mars are farthest apart in their orbits, they're 235 million miles apart. and it takes 20 minutes one way at the speed of light for a signal to propagate. and you can imagine how flow control will work with tcp. you'd say, ok, i'm out of room now. stop. the guy at the other end doesn't hear you say that for

20 minutes. of course, he's transmitting like crazy. and then packets are falling all over the place. it doesn't work. to make matters worse, there's this thing called celestial motion. the planets have this nasty habit of rotating. and so you can imagine you're trying to talk to a rover, and after a while, it rotates out of sight and you can't

talk to it until it gets back around to the other side. so the communication is disrupted. so we concluded very quickly that we were faced with a delay and disruption problem and needed to invent a set of protocols that built that into its assumptions, which were frankly not part of the internet assumptions, the tcp/ip protocols. so we developed a set of protocols. we've been going through tests of them.
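the 20-minute figure is easy to check. here is a quick python sketch of the arithmetic (illustrative only, not part of the talk), using the 235 million mile distance quoted above:

    # one-way light time between earth and mars at their farthest separation
    distance_miles = 235_000_000        # figure quoted in the talk
    meters_per_mile = 1609.344
    speed_of_light = 299_792_458        # meters per second

    one_way_seconds = distance_miles * meters_per_mile / speed_of_light
    print(f"one-way delay: {one_way_seconds / 60:.1f} minutes")   # roughly 21 minutes

so any tcp-style feedback loop at that distance takes on the order of 40 minutes per round trip, which is why the flow control described above simply breaks down.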

we've got to the point now where we have publicly available software. it's all up on the dtnrg website-- delay and disruption tolerant networking research group-- dot org. and that protocol has now been tested in terrestrial environments. we picked two. the defense department, darpa, funded the original

interplanetary architecture work. and after we got done and realized this dtn thing was a serious problem, we realized that they had a problem in tactical communication, the same problem, disruption, delay, uncertain delay. so we went back and said, we think you have a problem. and we think you could use the dtn protocols for tactical military communication. so they said, ok, prove it.

and we said, ok. we went off, and we built some-- these motes that come from berkeley, the little linux operating systems. we built a bunch of motes and put the dtn protocols on board. and then we went to the marine corps. and we said, ok, we'd like to test this stuff out. what application would you like to run? and they said, chat.

i said, are you kidding? you're sitting here with bullets whizzing by and you're going like this? and they said yes, because chat has the nice feature that when you reconnect, all the exchanges that took place that you missed are then given to you. and so you get back in sync with the other people who are part of this communications environment. so we said, well, ok.

so we did that. we implemented it. we went out to northern virginia to a test deployment of these things with the marine corps. and it worked. and so i thought that was a pretty cool demonstration of dtn working terrestrially, something useful coming out of all this. and the next thing i knew, they'd taken all of

our stuff to iraq. and i said, wait a minute, it's an experiment. and they said, no it isn't. and off they went. so we said, all right, fine. then we thought, well, let's try this in a more civilian environment as well. some of you, i'm sure, are familiar with the fact that the reindeer herders, the sami, are in the northern part of

sweden, and finland, and norway, and russia. and they are pretty isolated, because they're so far north. and satellite communication is a problem because the dishes are, bang, right there on the horizon. so we said, well, what would happen if we stuck a laptop with 802.11 in it and the dtn protocols in an all-terrain vehicle? and we put wifi service in the villages. so we tried one village.

and we tried this random interaction with the system using dtn. that was last year. so next year, we're going to try a multiple village test of the dtn protocols to see whether it works. and if it works out well enough, then maybe we'll put these things in the snowmobiles so that it'll work both during the summer and in the winter. so we're very happy that we've got terrestrial examples of

the use of the dtn protocols. we're now at the point where we're ready to start space-based testing. in 2009, we're hoping to put the dtn protocols on board the international space station. and in 2011, nasa has offered to allow us to put the protocols on board the deep impact spacecraft that already completed its primary mission, which was to launch a probe into a comet and gather data back.

but it's still out there, and it's still functioning. so somewhere around 2011, we hope to space-qualify the dtn protocols. and after that, adrian hooke, who's my counterpart now at nasa, is also chairman of the consultative committee for space data systems. and we hope to introduce that as a standard protocol for use in space communication. so what we're expecting, if we're lucky, is that the space agencies around the world will adopt this as a standard.

they'll use it for every mission that they launch. and that means that every time you launch a new mission, any previous mission assets that are available can become part of the support structure for the newly launched mission. what will happen as a result is it will accrete an interplanetary backbone over a period of decades as more and more of these missions get launched in the system. so let me stop there and thank you again for taking all this time.

and we'll see whether we can answer a few questions here if that's ok with you. so let me-- these are questions that apparently were submitted by many of you here. and i'll answer a few. and then we'll see if we get some immediate ones from the floor. the first question says, for many companies, the breakdown of

the internet for, say, three days would lead to substantial damage. question, is it possible to estimate the probability of such a breakdown of the entire internet? and if so, how could it be down? and are there any numbers available? or maybe how could it be done? i think the answer is i'm not going to tell you how it could be done.

the answer is that the probability that the entire internet could be taken down seems to be pretty small. there have been plenty of opportunities over the past 20 years or so since the 1983 rollout of the internet to destroy it in one way or another. and in spite of the fact that denial of service attacks are by far one of the most serious threats to internet stability, it seems unlikely that the entire internet would be taken down.

i will observe, however, that we manage to shoot ourselves in the foot fairly regularly by mis-configuring things. so if you mis-configure the routing tables of the routers, you can easily damage significant parts of the internet. and we seem to do that with more frequency than we would like. but i think on the whole, the robustness of the system has been pretty substantial.

that doesn't mean we shouldn't be introducing increasing amounts of security mechanisms into the network in order to limit that risk. what's the biggest fallacy about the internet? well, one of them is that al gore invented it. he didn't invent it, but he did have something to do with it. the other big fallacy is some people think the internet happened because a bunch of local area networks got

together one day and said, let's build a multi-network system. the fact is that it started with wide area networks and took a long time. will there ever be secure web apps based on a browser alone? would apps be more secure if run in applets? boy, that's a really good question. right now, the most vulnerable part of the internet world is the browser.

browsers ingest java code or other high level codes, and they often are unable to detect that this code is actually trying to take over the machine, or install a trojan horse, or do some other damaging thing. if we collectively were to invest anything at all, i think we should be investing in building much, much smarter browsers that are able to defend against some of the dangerous downloads. should the evolution of the basic protocols

such as http go-- or where should it go in order to support more asynchronous interactions of web 2.0? and a related question, is there too much overhead in http? well, first of all, i'd say that we should not depend solely on http as the medium of interaction on the net. peer-to-peer applications are really interesting. and although some of them get abused for copying and

distributing material that's copyrighted, in fact, they often are a very efficient way of linking people who want to communicate in the network, either peer-wise or in multiple groups. so my reaction right now to this one is that we really should be looking at asynchronous, peer-to-peer kinds of interactions in addition to the more classical http designs. can the world wide web help humanity to become a just,

democratic society? well, the short answer to that is no. probably not, although to be really honest and fair, the internet probably is the most democratic communication system that we've ever had, because it allows so many people to introduce content, and to share it, and exchange it. but humanity is what it is. shakespeare keeps telling us about them.

that's why the plays are still so interesting. so for humanity to become a just and democratic society is going to take a fair amount of adjustment of the human beings who make up that society, not just the technology that surrounds them. authentication on the internet typically means complete identification. how can the privacy of users be protected? the answer is that we need to do both.

we need to have anonymous access to the internet. and we also need to have the ability to do strong authentication. and the reason that you want to do both is that sometimes it's important to be anonymous. we all understand you can abuse the anonymity, and do bad things, and stalk people, and say things that are not true. but there are also times when it's important to be anonymous

in order to allow whistleblowing. on the other side, there are transactions that we want to engage in for which we really do need to know who the other party is. and so we want strong mechanisms to allow people to validate each other to themselves, or to validate the website to you. and at the same time, we also have to support anonymity. and i think we need both.

i'll tell you what. there's a list of almost 23 questions here. and i have the feeling that there are probably people in the audience who would like to ask some questions that they didn't ask ahead of time. so let me stop with the pre-asked questions and ask if there's anybody who would like to ask a question live from the floor. there's a microphone down here.

and if there's a brave soul who wants to ask the question, i promise i won't spit. and if there aren't any, i'll be happy to either go on or maybe run off the stage. let's see. when do you think ownership of the top dns will be transferred from the us government to an international organism such as the un? well, i'll be honest and say i hope it doesn't get

transferred to an international organism such as the un. i think that transfer of the icann operation to a multilateral organization would politicize it. i'd point out that icann is a multi-stakeholder organization, which means that governments, the private sector, the technical community, and the general public have access to the process of policymaking for the domain name system and internet address allocation.

it would be much better for this process to end up in a multi-stakeholder structure, not a multilateral structure like the un. the internet governance forum just finished its meetings in rio de janeiro. it too is a multi-stakeholder organization. and the conversations that take place among those different stakeholders, i think, are extraordinarily illuminating when it comes to seeing what the different

perspectives are about policy. so i hope that the answer is, first, it doesn't end up in a multilateral group, but rather a multi-stakeholder one. and i would suggest that there isn't very much more to be done to extract the us government from the role here. all it does right now, in the department of commerce, is to validate that icann has followed its procedures to do delegations of top level domains. and that's all it does.

it's never forced any decisions on icann. it's never rejected any recommendations that icann has made. that's not to say there aren't people-- many of them-- who would like to see the us not have this special relationship. and i hope that sometime, certainly, in 2008, there's an opportunity to revisit that relationship when we review what's now called the joint project agreement between the department of commerce and icann.

ok. well, let me stop there. and if there aren't any questions, i think maybe we can wrap up. i don't know whether you would like to say a closing benediction. randy knaflic: no. i just would like to say thanks to vint cerf.
