CFP2001: The Future of Computing, Freedom and Privacy

Library Hi Tech News

ISSN: 0741-9058

Article publication date: 1 June 2001

Citation

Glover, B. and Meernik, M., with an introduction by Barbara Glover (2001), "CFP2001: The Future of Computing, Freedom and Privacy", Library Hi Tech News, Vol. 18 No. 6. https://doi.org/10.1108/lhtn.2001.23918fac.003

Publisher: Emerald Group Publishing Limited

Copyright © 2001, MCB UP Limited


CFP2001: The Future of Computing, Freedom and Privacy

Barbara Glover and Mary Meernik with an introduction by Barbara Glover

The snowflake dancing on the home page of the 11th annual Conference on Computers, Freedom and Privacy <www.cfp2001.org> might have been placed there to symbolize the major blizzard that hit New England March 6, delaying the start of the conference and preventing many from reaching Cambridge, MA. Even setting the storm aside, the 2001 conference program was quite skimpy compared to that of the celebratory 10th conference held in the year 2000, with the number of panels, speeches, and tutorials reduced by one-half or more. Audiotapes of the nine panel sessions and two speeches are available from Fleetwood Multimedia, Inc. <http://www.fltwood.com/onsite/cfp/>.

ICANN (Internet Corporation for Assigned Names and Numbers) and UCITA (Uniform Computer Information Transactions Act) were two of the hot topics this year, along with the cyber-treaties being proposed by the G-8, the Council of Europe, and the Hague Conference on Private International Law. The following concerns were among those voiced at the conference:

  • powerful forces are working to tame and control the Internet;

  • consumers are losing out;

  • the principles of free speech and fair use are being threatened;

  • there is a growing movement to lock down information;

  • digital devices with surveillance capabilities are proliferating.

While some speakers worry that officials with no understanding of technology are creating policy, others worry that technologists with no understanding of the social implications of technology are drafting protocols. Fortunately, both sides do meet at CFP once a year to talk with each other. Offsetting fears of the FBI's Internet surveillance system, called Carnivore or DCS1000, and of being sued for Internet activity in courts throughout the world were the following pieces of good news:

  1. there have been significant improvements in the information technology products available to the human rights community;

  2. the IETF (Internet Engineering Task Force) is continuing to favor privacy over convenience; and

  3. the number of corporate chief privacy officers is increasing.

Awards have become part of the CFP agenda. Only one click from the CFP2001 home page, "librarians everywhere" can learn that they were honored with an Electronic Frontier Foundation Pioneer Award at the 2000 conference <http://www.eff.org/awards/pioneer.html> for making "a substantial contribution to the health, growth, accessibility, or freedom of computer-based communications." Unfortunately, no librarians appear on panels or raise their voices in question and answer sessions at CFP2001. CFP should be of great interest to librarians because it advertises itself, and seems to be generally recognized, as "the leading policy conference for exploring the impact of the Internet, computers and communications technologies on society." Moderators and panelists often represent organizations that fight for the same rights and freedoms librarians are most concerned with. The 2002 conference is scheduled for April 16-19 at the Cathedral Hill Hotel in San Francisco. Its Web site is already established at <www.cfp2002.org>.

UCITA or Have You Ever Read a Shrink Wrap License?

Andy Grosso, Attorney at Law (moderator)
Carlyle Ring, Ober, Kaler, Grimes and Shriver
Cem Kaner, Department of Computer Science, Florida Institute of Technology

Moderator Grosso sets the tone for this session by listing ten variations on the title of the program, including "He cheata, she cheata, we all need UCITA," "I come not to praise UCITA but to bury it," and "You need a UCITA, or do you?" Ring served as chair of the committee of the National Conference of Commissioners on Uniform State Laws that drafted the Uniform Computer Information Transactions Act (UCITA). Despite many years of volunteer service as a commissioner, he claims he would not have agreed to perform this particular "thankless job" if he had known what he was getting into! Panelist Kaner worked diligently to improve the Act while attending 15 of the 16 three-day UCITA drafting sessions but confesses that he has since become a member of the opposition.

The National Conference of Commissioners on Uniform State Laws was formed 112 years ago to draft and promote the adoption of uniform state laws whenever that would be deemed useful and desirable. Existing uniform laws provide for the recognition of bank checks across state boundaries, facilitate participation in the anatomical donation program, enforce child support orders from state to state, and discourage "child kidnapping" by non-custodial parents. According to Ring, UCITA grew out of a perceived need for common rules for negotiating contracts on the Internet. Because contract law is state law, UCITA was originally proposed as an article of the Uniform Commercial Code. However, UCITA became an orphan when the American Law Institute (which serves as co-author and co-approver of any changes to the Uniform Commercial Code) walked out of deliberations. By the March date of the CFP conference, only the states of Virginia and Maryland had adopted UCITA.

UCITA has many opponents. Kaner opposes the Act for the following reasons:

  • its application of "the licensing paradigm to sale-like transactions that involve consumers and others who do not have much negotiating power;"

  • its "adoption of a post-sales structure for presenting terms of contracts;"

  • its lack of accountability for software publishers who knew about defects and bugs in their product at the time of sale; and

  • the difficulty of upholding warranties implied in a shrink-wrap contract.

He charges that "consumer protection rules pretty well get wiped out under UCITA; you're no longer buying goods, but an intangible license to use something." UCITA even contains non-disclosure provisions under which software publishers could interfere with the publication of reviews of their products. According to Kaner, "UCITA is one of the most complex statutes that's been drafted in a century," so complex that many attorneys oppose it because they cannot figure it out! The American Intellectual Property Law Association as well as the New York City Bar Association's committees on copyright and on entertainment law agree that UCITA gives too many additional contract rights to publishers by providing "an end run around the 'fair use' and 'first sale' doctrines of the Copyright Act." These organizations are concerned about the adverse impact UCITA would have on competition. Although continuing to perceive a need for "coherent and predictable rules to support the contracts that underlie [the information] economy," the American Law Institute admitted in a recent press release that "it has become apparent that this area does not presently allow the sort of codification that is represented by the Uniform Commercial Code (UCC)."

Through a Glass Darkly: The Hague Convention

James Love, Consumer Project on Technology (moderator)
Jonathan Zittrain, Harvard Law School
Patrick Wautelet, Harvard Law School

Wautelet provides some historical background for this panel. During the past 100 years or so, the Hague Conference on Private International Law has drafted more than 30 treaties in the field of private international relations. It is a highly specialized organization with 49 member countries but no more than a dozen employees. The US delegation to the conference consists of academic and governmental employees as well as representatives from non-governmental organizations such as lawyers' associations and human rights groups. Since 1992, the Hague Conference has been developing a treaty entitled the Hague Convention on Jurisdiction and Foreign Judgments in Civil and Commercial Matters. The draft Convention can be found on the Internet at <http://www.hcch.net/e/conventions/draft36e.html>. It covers cross-border commercial disputes and contains many rules as to where you can sue and be sued. It does not deal with patents, tax law, criminal law, or litigation by governments.

Love is amazed that so few people seem to know about this convention, which would require countries to enforce court judgments entered in other member countries. Fortunately, the treaty does contain a "public policy exception" which would allow a government to opt out of compliance if it determines that a particular judgment is contrary to fundamental values of its legal system. Europe has already made a great deal of progress in developing jurisdictional rules, but it has the advantage of being fairly well integrated, both economically and at the supranational level. On the other hand, the Hague Convention will attempt to impose some harmonization upon a much wider array of nations possessing far different legal traditions. Love worries that the Hague Convention could lead to a "meltdown" of the free speech and fair use rights Americans are accustomed to, as the courts will uphold only those rights that exist in every single member country. Therefore, arguing to put "people before property," he states, "Before creating a global convention to enforce property owners' rights across borders in an Internet context, we first have to have a global convention to protect the public's rights in intellectual property and speech."

The Internet has blurred boundary lines since work started on this convention in 1992. Back then, you could choose whether or not you wanted to do business in a country and it made sense that you should be held liable if you went to another country, hurt someone, and then left. Nowadays nearly every Internet transaction has the potential to develop international ramifications. Under the proposed Hague Convention, people who develop and mount free software on the World Wide Web might be sued and have to defend themselves in different parts of the world. Zittrain proclaims it unreasonable to expect that everyone become familiar with the laws of every country that might sign the Hague Convention. He expects that technologists will soon develop useful tools, such as programs that would allow Internet users to block incoming or outgoing material from particular countries.
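The kind of blocking tool Zittrain anticipates is straightforward to picture. The sketch below is a minimal, hypothetical illustration: the table mapping network blocks to countries is invented (a real tool would consult a geolocation database of registered address ranges), and it simply refuses traffic to or from jurisdictions the operator has chosen to avoid.

```python
# A minimal sketch of a country-blocking tool of the kind Zittrain
# anticipates. The network-to-country table is invented; a real tool
# would consult a geolocation database of registered address blocks.
import ipaddress
from typing import Optional

COUNTRY_BLOCKS = {
    ipaddress.ip_network("192.0.2.0/24"): "XA",     # hypothetical country XA
    ipaddress.ip_network("198.51.100.0/24"): "XB",  # hypothetical country XB
}

BLOCKED_COUNTRIES = {"XB"}  # jurisdictions whose courts the operator avoids

def ip_to_country(addr: str) -> Optional[str]:
    """Return the country whose block contains addr, if any."""
    ip = ipaddress.ip_address(addr)
    for network, country in COUNTRY_BLOCKS.items():
        if ip in network:
            return country
    return None

def allow_traffic(addr: str) -> bool:
    """Refuse to exchange material with addresses in blocked countries."""
    return ip_to_country(addr) not in BLOCKED_COUNTRIES

print(allow_traffic("192.0.2.7"))     # True: country XA is not blocked
print(allow_traffic("198.51.100.9"))  # False: country XB is blocked
```

The design choice is the point: liability exposure becomes a routing decision rather than a legal analysis of every signatory's statutes.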

The ICANN Election and Beyond

Barbara Simons, Association for Computing Machinery (moderator)
Diane Cabell, Berkman Center for Internet & Society, Harvard Law School
Brad Templeton, chairman, Electronic Frontier Foundation
Peter G. Neumann, SRI International

This panel was handicapped from the start because three of the four scheduled speakers were unable to participate. Fortunately, well-qualified audience members abound at CFP and Moderator Simons managed to convince Templeton and Neumann to join the panel at the last minute. Not one of the speakers or audience members is happy with ICANN (Internet Corporation for Assigned Names and Numbers) or with ICANN's dispute resolution procedures. And their unhappiness has definitely skyrocketed since the incomplete November 2000 election of nine at-large representatives to the 18-member board. Following the Clinton administration's agreement that ICANN should be opened up to competition by allowing one-half of the board to be elected from at-large Internet users, Cabell spent one and one-half years serving on the special study committee that painstakingly designed the election process. Moderator Simons herself spent three months on her candidacy for one of the at-large board seats. Since then, the board has appointed yet another committee, a "clean sheet" committee that seems to be moving in the direction of invalidating the entire election process. Meanwhile, the ICANNWatch organization <www.icannwatch.org> claims to be monitoring developments on behalf of at-large Internet users.

Neumann focuses on the "larger-than-naming" challenge of identifying international strategies to provide effective overall Internet governance, claiming that ICANN needs to be one part of the governance picture. He works to influence solutions through the organization named People for Internet Responsibility <www.pfir.org>. Moderator Simons reads some comments from missing panelist, Michael Froomkin of the University of Miami School of Law. Froomkin hopes that ICANN will eventually be limited to technical or mechanical procedures, leaving policy decisions to overall Internet governing bodies. Until that time, however, he says ICANN desperately needs elected representatives on its board because so many policy decisions will arise and need to be addressed throughout the process of establishing the newly announced global top level domain names.

IETF Standards Landscape

Scott Bradner, Internet Engineering Task Force

The Internet Engineering Task Force is a very open organization operating under the aegis of the Internet Society. More than 2,800 people attended the two-week long December 2000 meeting in Minneapolis, an enormous increase since the Task Force started its work in 1986 with only 21 attendees. The IETF is organized into 131 working groups within eight broad areas. Each group hosts a forum for the collection of comments from the public. IETF standards are adopted by consensus at meetings; that is, hands are raised but votes are not counted.

Security and privacy, always concerns of the IETF, have been receiving increased attention lately. The IETF has adopted standards and publicized "best current practices" for cookies, but Bradner points out that no one has the power to enforce IETF standards. Although willing to publish wiretapping specifications, the IETF decided through the consensus procedure that working groups should not be forced to add wiretapping capabilities to their protocols. More and more of the working groups are taking up telephony issues such as spatial location, caller identification, and encryption. Bradner discusses the constant challenges the organization deals with in balancing technological capabilities and business interests with concerns over social implications. The IETF prefers to "keep the balance in favor of privacy, not convenience" by ensuring "that the tools are there to make people as anonymous and private as they want to be."

Information Technology in the Service of Human Rights

Matthew Zimmerman, American Association for the Advancement of Science (moderator)
Patrick Ball, American Association for the Advancement of Science
Wendy Betts, ABA Central and East European Law Initiative
James Fruchterman, The Benetech Initiative

Non-governmental human rights organizations work in various parts of the world to monitor compliance with the Universal Declaration of Human Rights that was adopted by the United Nations in 1948. These organizations collect, organize, analyze and disseminate information about abuses of human rights. This CFP program focuses on various information technology tools being developed to assist them in this process.

Benetech, a ten-year-old non-profit company based in Silicon Valley, has developed an open source software product named Martus. Martus is a text management program, including e-mail, with strong encryption features. It empowers grass-roots human rights workers, even those with minimal technical skills or equipment, to easily capture and distribute information on human rights violations. As soon as a Martus user connects to the Internet, previously recorded information on violations is broadcast to servers around the world.
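The encrypt-locally, replicate-widely pattern described here can be sketched in a few lines. The code below is not Martus itself; the in-memory "servers," the bulletin text, and the use of symmetric Fernet encryption (from the third-party cryptography package) are all simplifications for illustration.

```python
# Not Martus code: a toy version of its encrypt-then-broadcast pattern.
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

servers = [[], [], []]  # stand-ins for backup servers around the world

key = Fernet.generate_key()  # in practice, held only by the field worker
bulletin = "2001-03-06: detention of three journalists observed."

# Encrypt locally, so a seized or stolen laptop reveals nothing readable.
sealed = Fernet(key).encrypt(bulletin.encode("utf-8"))

# "Connecting to the Internet": replicate the sealed record everywhere,
# so destroying any one copy (or the laptop) cannot erase the report.
for server in servers:
    server.append(sealed)

# Only a key holder can recover the text later.
print(Fernet(key).decrypt(servers[0][0]).decode("utf-8"))
```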

Betts has been developing a next generation database for organizing and classifying human rights data. She emphasizes how complex that data can be by pointing out that a single incident frequently involves "multiple violent acts, multiple victims ... multiple perpetrators, [and] multiple roles ascribed to the same individual [i.e.] a person can be a victim, a perpetrator, and a witness." The database must accurately reflect all these possibilities with their interconnections. Clearly understood definitions are necessary in order to avoid ambiguity. Ball talks about the challenges involved in analyzing massive quantities of information so that the data will provide a reliable description of the human rights scene in a particular region.
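Betts' modeling problem lends itself to a small sketch. Below is one hypothetical way to structure it, with roles attached to the (person, incident) pair rather than to the person, so that the same individual can hold several roles; all names, acts, and identifiers are invented.

```python
# A hypothetical data model for the problem Betts describes: one
# incident, multiple acts, multiple people, multiple roles per person.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Incident:
    incident_id: str
    acts: List[str]                 # multiple violent acts per incident
    # (person, role) pairs: a role belongs to the pair, not the person
    roles: List[Tuple[str, str]] = field(default_factory=list)

    def add_role(self, person: str, role: str) -> None:
        self.roles.append((person, role))

incident = Incident("INC-042", acts=["detention", "beating"])
incident.add_role("Person A", "victim")
incident.add_role("Person B", "perpetrator")
incident.add_role("Person B", "witness")  # same individual, second role

# Every role ascribed to Person B in this incident:
print([r for p, r in incident.roles if p == "Person B"])
# ['perpetrator', 'witness']
```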

Panelists emphasize the importance of providing human rights workers with open source software that is standardized, widely available, secure, and easy to use while being sophisticated in its ability to distinguish levels of detail. According to Betts, the hope is that information technology will facilitate "more sophisticated analysis of a larger pool of data than has been possible previously" and "the results of these analyses in turn [will] work to raise international awareness of and action against threats that human rights violations pose to peace and freedom." Accountability and deterrence are the goals.

Cyber-crimes and Cyber-rights: the Council of Europe and the G-8

David Banisar, Privacy International (moderator)
Gus Hosein, London School of Economics
Jim Halpert, Piper Marbury Rudnick and Wolfe

It is extremely unfortunate that no law enforcement representatives would agree to appear on this panel, but perhaps it is indicative of the growing divide between privacy advocates and the law enforcement community. In fact neither the Council of Europe nor the G-8 sought any input from privacy groups in formulating their cyber-crime recommendations.

Hosein opens the discussion with a review of the process behind the G-8 and Council of Europe initiatives. There has been a tremendous push to secure international cooperation in fighting high-tech crime in such areas as terrorism, denial of service attacks and child pornography. The position of the G-8, the group of eight industrialized democracies, is that "we cannot allow safe havens to exist" for cyber-criminals. Accordingly, these countries are seeking to work jointly with industry to design systems that will help prevent and detect network abuse. The Council of Europe's 41 member states are pushing for governments to enact laws based on a common standard for criminal conduct, a broad ability to investigate computer crimes, and international cooperation, including extradition, in the investigation and prosecution of crimes.

Banisar characterizes the Council of Europe process as a "complete joke," pointing out that 23 meetings were held before any information was released to the public. The proposed treaty provides law enforcement agencies with incredibly broad powers to conduct wiretapping and surveillance activities and to confiscate computer systems. There are no safeguards built into the treaty; the Council's position is that the member states would have their own national laws. But as Banisar observes, many governments do not have civil liberties or due process procedures in place to prevent abuse.

Halpert echoes Banisar's sentiments, describing the Council's treaty as "totally asymmetrical." He discusses the offenses that made it into the final draft, including forgery of data or inputting of data that results in economic loss to another party (basically unfair competition), circumvention of security measures and equipment, child pornography, and willful copyright infringement. The definition of political offenses was moderated so as not to require international cooperation. Halpert stresses that countries can take radically different views of these offenses and the liability issues that arise. He is very concerned about the potential liability of third parties, including Internet service providers and universities. However, it is the treaty's definition of "corporate liability" which he believes "imposes the greatest chill on expression." Since corporations can be found liable for a lack of supervision of a criminal wrongdoer, they will have "to surveil everything employees are doing online." Halpert also argues that the treaty should make provisions for reimbursement of costs to member countries, because without some sort of financial discouragement, there would be no limit to law enforcement surveillance requests. In addition, Halpert is critical of the treaty's "take it or leave it proposition;" member countries would not have the right to pick and choose among the treaty's provisions.

Since the treaty is so close to ratification, the three panelists hold out little hope that privacy groups could have any significant impact at this point. Banisar believes that after ratification opponents of the treaty will have to take a country by country approach to try to moderate some of its harsher aspects. Halpert credits the US Justice Department with being responsive to privacy concerns and believes that US ratification of the treaty is not a foregone conclusion.

The Great Ballot Debate

Lorrie Cranor, AT&T (moderator)
Peter Neumann, SRI Computer Science Lab
Jim Adler, Votehere.net
Stephen Ansolabehere, MIT
Dave Del Torto, CryptoRights Foundation

After the debacle of the 2000 presidential election, it is refreshing to have CFP lend some perspective as to whether technology can solve the problems with voting equipment and procedures. Cranor opens the session by describing the different types of voting systems: hand-counted paper ballots, mechanical lever voting machines, punch card voting systems, optical scan voting systems, direct recording electronic systems, and such Internet variations as precinct, regional, kiosk and remote voting.

Cranor then poses questions to the panelists, who in this mock scenario are advising the governor of Florida on how to improve the voting process. Neumann stresses that a good voting system should reconcile auditability and privacy. "The fundamental requirement must be integrity in the process and that the voter must have some sense of privacy." However, he emphasizes that it will always be possible, regardless of how perfect the software code is, to rig or subvert an election.

In response to Cranor's question about whether any system can even come close to end-to-end integrity, Adler concedes that the voting problem is incredibly challenging and that "there is no silver bullet." His company Votehere is attempting to open up the process to public scrutiny by disclosing the cryptographic protocols and source code for its own voting systems. He stresses, however, that technology is only one piece of the puzzle; other issues, including system certification procedures, education of election administrators, poll workers and voters, and the business practices of elections, must also be addressed.

Ansolabehere is asked whether Florida should seek out a computerized system or stick with old technology. He points out that elections across the country are conducted "on a shoestring" with the goal always being to speed things up and to lower costs. He complains that voting systems are "not designed with the human interface in mind" which leads to significant and widespread voter mistakes. His research has shown that systems with higher reliability are paper ballots manually counted, lever machines and optically scanned paper ballots. Systems that have proven less reliable in terms of overvotes and uncountable votes are punch card systems and electronic machines.
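The overvote and undervote measures Ansolabehere cites are easy to make concrete. The following is a deliberately simplified, hypothetical sketch of the check a tabulator might run on a single one-seat race; a precinct-count optical scanner can apply such a check while the voter is still present to correct the ballot.

```python
# A simplified sketch of ballot classification for one single-seat race.
# Ballots are invented; real tabulators handle many races and edge cases.
from typing import List

def classify_race(marks: List[bool], seats: int = 1) -> str:
    """Classify the marks a scanner read for one race."""
    count = sum(marks)
    if count > seats:
        return "overvote (uncountable)"
    if count < seats:
        return "undervote"
    return "valid"

print(classify_race([True, False, False]))   # valid
print(classify_race([True, True, False]))    # overvote (uncountable)
print(classify_race([False, False, False]))  # undervote
```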

In response to Cranor's last question about the prospects for Internet voting in the near future, Del Torto posits that the best solution would be a system that is "extremely low-tech so it could be used worldwide." He would like to see a system that is not commercial, is capable of real-time audits and has an easy interface that can accommodate any type of voter handicap. During the question-and-answer session, other potential problems with Internet voting were discussed, including lack of equitable access to computers, denial of service attacks that could cripple the voting process, the possibility that designers of the voting software could be bribed, and security issues because people would no longer be voting in a controlled environment.

Gadgets That Spy

Richard Smith, Privacy Foundation

For some years, computer users have known that their Web traffic and e-mails could be tracked and monitored by various entities. Now, as this session makes surprisingly evident, the Internet is no longer "PC-centric" but is moving to a wide range of consumer devices. Smith discusses various gadgets which he categorizes as follows: "direct marketing data pumps," biometric devices, amateur private investigator gadgets, and future technologies.

The first group includes such products as an exercise monitor, a barcode reader that scans advertisements, a device that allows the user to track when songs are played on the radio, and a mousepad that records the user's favorite Web sites. These Web-enabled devices are all designed to send feedback to vendors to help them sell more products to the customer. Even though the bursting of the e-commerce bubble might have been expected to reduce the appeal of such devices to marketers, Smith stresses that the technology is so inexpensive that the concept is still attractive.

Biometric devices include fingerprint authenticators, face recognition technologies and tollbooth payment systems that are tied to the driver's bank account. Even though such technologies do have legitimate security applications and convenience advantages, their use raises important concerns about storage, retention and access to the data by third parties.

Private investigation tools include global positioning system receivers that record every place the device has been, wireless Web video cameras, watch cameras, keyboards that record the user's keystrokes and workplace surveillance systems such as packet sniffers. Future technologies include CD business cards that contain self-extracting programs and pens with transmitters that send everything the user writes to a computer.

Although much of this technology is still "bleeding edge," Smith stresses, "most people don't understand how far this has already gone." As we ponder a future where more and more everyday devices have surveillance capabilities, it is critical that we start work now to get the necessary data protection and privacy laws in place.

Chief Privacy Officers: Boon or Boondoggle?

Evan Hendricks, Privacy Times (moderator)
Jason Catlett, Junkbusters (interlocutor)
Stephanie Perrin, Zero-Knowledge Systems
Nuala O'Connor, DoubleClick

Are chief privacy officers (CPOs) merely the latest public relations phenomenon, or will they actually play a unique enough role to justify their existence five years down the road? This session's panelists wrestle with the rationale behind the recent flurry of CPO appointments and with how the CPO functions within the organizational structure. Moderator Hendricks, while acknowledging the need for "someone watching the store," questions whether CPOs will be "free to challenge corporate mindthink on a wide range of issues." Perrin views the CPO role as essential, observing that privacy is not a "black and white thing; the more you get the more complicated it becomes."

Much of the discussion then centers on DoubleClick's embrace of the CPO concept after strong criticism forced the company to cancel plans to merge data about people's Web surfing habits with their personal information. O'Connor, an attorney who works for the Internet advertiser's CPO, claims that the company's experience has provided it with "a bully pulpit in the industry" which it is using to educate clients about fair information practices. She urges companies to "treat privacy as a core business mission" and to recognize that "privacy should pervade every business decision." Catlett is not convinced about the purity of DoubleClick's motives in establishing a privacy department. In fact, he considers the appointment of CPOs by DoubleClick and other companies to be merely public relations ploys, claiming that CPOs are nothing more than "corporate crash test dummies." Perrin, in defense of DoubleClick, chastises Catlett and other privacy advocates who "don't applaud progress when it happens." She argues that DoubleClick deserves credit for setting up a privacy infrastructure even though it was done after the fact. O'Connor concedes that her company still has a long way to go, but that it is committed to fair information practices. She also points out that online companies are held to higher standards than offline ones when it comes to notice and informed consent requirements. Perrin laments the incredible complexities of informed consent, warning that "consumers don't want that kind of educational burden." Consequently, the level of awareness about how privacy protections work is still really low. She urges better integration of technology with policies and laws, and O'Connor concurs, admitting that "I frequently make decisions at the company that have absolutely no foundation in law, but I know are the right thing to do."

During the question-and-answer session, the problems of passive profiling online are discussed, with an audience member making the analogy to a library where materials do not rearrange themselves based on who the patron is. Perrin agrees that online tracking and stratification of people is dangerous, emphasizing that we "need to make sure that we are not entering a world that has been decided by others."

Carnivore/DCS1000: The Name is Changed, But Are the Innocent Protected?

David Sobel, Electronic Privacy Information Center (moderator)
Matt Blaze, AT&T Laboratories
Harold Krent, Chicago-Kent College of Law
Mark Rasch, Predictive Systems

Considering the bite it could take out of our civil liberties, Carnivore seems an apt moniker for the FBI's Internet surveillance system. Despite the intense criticism, all the Bureau has done so far is to give the system a new innocuous name, DCS1000. In his introductory remarks, Sobel stresses that Carnivore "breaks new ground legally," giving law enforcement agencies access to all traffic moving through an Internet service provider (ISP). Because Internet traffic is bound up in packets, it is very difficult to distinguish one person's communications from another's. The potential for abuse resulting from either intentional or accidental system misconfiguration is significant.

As no official representatives would consent to be on the panel, Krent agrees to serve as the "government apologist" to explain why Carnivore was developed. Criminals engaged in terrorism, identity theft, money laundering and child pornography are "thriving because of the Web" and law enforcement agencies are forced to use systems like Carnivore to collect evidence. He points out that of the 25 to 30 times that the FBI used Carnivore last year, 25 percent of the cases were actually initiated by scam or stalking victims for their own protection.

Blaze starts from the premise that there are no legal problems with Carnivore and looks at the system from a purely technical standpoint. Like Sobel, he emphasizes how difficult it is to reconstruct Internet traffic and to minimize the amount of information collected. Because many ISPs use dynamic IP addresses, the surveillance system is not only monitoring the target of the investigation, but also any persons who have been assigned the same IP address. In addition, Blaze stresses that no two ISPs use exactly the same configuration, making it nearly impossible to prove that Carnivore is working reliably. It is also doubtful whether Carnivore can distinguish real network traffic from altered or forged packets. All these technological issues have convinced Blaze that "too much faith is being put in the evidence collected."
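Blaze's dynamic-IP point can be made concrete with a toy capture log. In the hypothetical timeline below (all addresses, timestamps, and payloads are invented), the ISP's DHCP server reassigns the target's address mid-capture, and a filter keyed on that address sweeps in a bystander's traffic.

```python
# A toy illustration of Blaze's dynamic-IP concern: a filter keyed on
# the target's address also collects from whoever holds it later.
# All addresses, times, and payloads are invented.
from dataclasses import dataclass

@dataclass
class Packet:
    timestamp: int  # seconds since the capture began
    src_ip: str
    payload: str

# DHCP reassigns 10.0.0.5 around t=600: first the target holds it,
# then an unrelated subscriber who dials in afterward.
capture = [
    Packet(100, "10.0.0.5", "mail from the investigation target"),
    Packet(700, "10.0.0.5", "mail from an unrelated subscriber"),
]

TARGET_IP = "10.0.0.5"  # what a naive address-based filter keys on

for pkt in capture:
    if pkt.src_ip == TARGET_IP:
        print(f"t={pkt.timestamp}: collected: {pkt.payload}")
# Both packets are collected, though only the first is the target's.
```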

Rasch takes the opposite tack, assuming that the technology works perfectly and that we need only consider the legal problems. He discusses the legalities of pen register (header information) and full content searches, the two types of searches that Carnivore is designed to do. There is really no consensus, however, on what constitutes header information versus information in the body of a communication. Rasch is very concerned about the potential for abuse in conducting pen register searches that do not require law enforcement officials to show probable cause. They merely have to demonstrate that the information obtained would be relevant to a criminal investigation. Although header information is much more revealing than toll records and phone numbers, the government is attempting to apply the same legal principles of telephone wiretapping to Internet surveillance. A full content search does require a judge's approval and has multiple layers of review built into the process. It may take up to six months to obtain approval and federal officials must "demonstrate that less intrusive techniques are not available." But because Carnivore automates what had previously been a very expensive and time-consuming process, Rasch anticipates that we "will soon see many more interceptions."
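The pen register/full content line Rasch describes is mechanically simple but semantically blurry. In the invented message below, the split falls at the first blank line, yet the Subject field, captured as a "header," reads like content; this is exactly the kind of ambiguity the panel worries about.

```python
# A sketch of the header/content split on an invented e-mail message.
# Mechanically the message divides at the first blank line, but the
# Subject line shows why "header information" can reveal content.
raw_message = (
    "From: alice@example.org\n"
    "To: bob@example.org\n"
    "Subject: meet at the usual place\n"
    "\n"
    "Bring the documents.\n"
)

headers, _, body = raw_message.partition("\n\n")
print("pen register search would capture:")
print(headers)
print("full content search adds:")
print(body)
```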

During the question-and-answer session, the panelists stress just how surreptitious electronic surveillance is. Unlike a home search, which the target of an investigation has knowledge of and can challenge in court, Carnivore surveillance is conducted without informing the target or anyone else that communications are being monitored. In addition, the Electronic Communications Privacy Act does not even require mandatory suppression of electronic communications that were obtained illegally. The panelists also reserve some criticism for the Internet service providers themselves. Although there are laws against "private and public snooping," there is a "provider exception" which permits ISPs to conduct surveillance to protect themselves and to prevent fraud. Unfortunately, many ISPs are overstepping these legal boundaries by requiring their customers to sign consent forms giving blanket permission to monitor all Internet traffic.

Super Bowl 2001: The Game Was the Least Interesting Part

Barry Steinhardt, American Civil Liberties Union (moderator)
Thomas Colatosti, Viisage
Samir Nanavati, International Biometric Group
Simson Garfinkel, journalist, entrepreneur and authority on computer security

In his opening remarks, Steinhardt characterizes the capturing of all attendees' images at the 2001 Super Bowl as the "first high profile marriage between face recognition technology and video surveillance" in the USA. Video surveillance systems are becoming increasingly pervasive and now incorporate infrared and radar technologies that enable monitoring in darkness and behind walls. Steinhardt warns that these surveillance systems, which are operated predominantly by men, disproportionately target minorities and women. He worries that applications like the Super Bowl surveillance will result in a "nearly universal photographic database" that could be used to identify us in all sorts of circumstances. In other words, we have basically lost our Fourth Amendment protection against unreasonable searches and seizures.

In defending his company, which developed the system used at the Super Bowl, Colatosti insists he is "committed to use technology to improve personal privacy." He is frustrated at the "outrageous hyperbole" that has characterized the Super Bowl incident, pointing out that people have accepted being tracked in public places for security purposes. He stresses that captured images that did not match on the database supplied by law enforcement were not retained; consequently, his company knows absolutely nothing about those thousands of people at the Super Bowl. He claims that face recognition is the most convenient, least intrusive and most cost effective biometric for identification and security applications. He also argues that there is nothing private about a person's face and that "the visible public face cannot be equated with personal private information."

Nanavati emphasizes that "both the protection of privacy and the erosion of privacy" are inherent in the use of any biometric. Because biometric technologies have the potential to be used incorrectly or with bias, depending on the operator, it is essential to build responsible applications. He distinguishes between one-to-one systems, used to confirm identity, and one-to-many systems, used for surveillance. Research done by his company has demonstrated that once people have actually utilized a one-to-one biometric system and understand how it works, their acceptance rates exceed 90 percent (compared to a 60 percent disapproval rating prior to using the system).
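The one-to-one versus one-to-many distinction can be sketched in a few lines. In the toy code below, face templates are reduced to tiny feature vectors, and the similarity function and threshold are made up; the point is only that verification answers "is this the claimed person?" while identification searches a whole watch list.

```python
# A toy sketch of Nanavati's distinction. The feature vectors, the
# similarity measure, and the threshold are all invented.
from typing import Dict, List, Optional

THRESHOLD = 0.9  # hypothetical match-score cutoff

def score(a: List[float], b: List[float]) -> float:
    """Toy similarity: 1 minus the mean absolute difference."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def verify(probe: List[float], enrolled: List[float]) -> bool:
    """One-to-one: does the probe match one claimed identity?"""
    return score(probe, enrolled) >= THRESHOLD

def identify(probe: List[float],
             gallery: Dict[str, List[float]]) -> Optional[str]:
    """One-to-many: search a watch-list gallery for any match."""
    best = max(gallery, key=lambda name: score(probe, gallery[name]))
    return best if score(probe, gallery[best]) >= THRESHOLD else None

gallery = {"suspect-1": [0.2, 0.4], "suspect-2": [0.9, 0.1]}
print(verify([0.21, 0.41], gallery["suspect-1"]))  # True: access granted
print(identify([0.5, 0.5], gallery))               # None: no watch-list hit
```

The same sketch hints at Garfinkel's reliability point below: a one-to-many search multiplies the opportunities for a borderline score, which is one reason surveillance galleries are harder to get right than access control.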

Following up on Nanavati's comments, Garfinkel points out that one-to-one systems used for access control are much more reliable than one-to-many systems. One-to-many systems typically utilize face recognition technology that Garfinkel characterizes as "sort of muddy." Nevertheless, "we are building a complete surveillance infrastructure and face recognition is clearly part of it." Garfinkel also warns that there is always a backdoor to any biometric system and we need to be concerned about who controls that access. Auditing procedures are essential because it is inevitable that false data will be introduced either intentionally or accidentally. He predicts that biometric systems will soon be retaining everything they capture because that is "the natural way technology works." Consequently, we are in a very vulnerable position because there are no laws or fair information practices dealing with the use of biometrics.

Barbara Glover (barbara.glover@emich.edu) is Federal Depository Librarian and Cataloger, and Mary Meernik (mary.meernik@emich.edu) is Cataloging Librarian, Bruce T. Halle Library, Eastern Michigan University, Ypsilanti, Michigan.
