The way it's done The way the licensed content market works now: Licensee (university) negotiates on behalf of its users. Generally seeks the most content for the lowest price. Few distinctions in quality of access: usually full-text. Contracts not written in binary-encodable fashion.
And the goods to whom? AuthZ based on institutional affiliation: IP ranges, proxies, VPNs, etc. Largely anonymous... Some publishers seek user profiling, but profiles can be faked: ✔ Sumie Ishida ✔ 34-year-old female in Denver ✔ Biology grad student (NOT!)
Flat landscape Currently emphasis is on rapid access. Local resolution services (appropriate copy). Users want full-text, fast, with minimal mediation. The market for extra toppings doesn't substantially exist.
Price of privacy In essence, content licensees (universities) mediate monetary transactions for their users. Markets exist for many things though, and there are other media for exchange besides money. Shibboleth enables:... content providers to purchase personal (profile) data... through greater control of information delivery The exchange medium is privacy.
Knowledge gets smaller Interesting new scenarios become possible. “If you wanted to see the index with link resolution, Kindly provide us the following information.” Shibboleth enables:... increasingly small amounts of information... to be differentiated for access control Commodification of access to knowledge.
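To make the tiering concrete, here is a minimal sketch (not Shibboleth code; the tier names and attribute keys are invented for illustration) of an access rule in which each additional attribute a user releases "buys" a deeper tier of content:

```python
# Hypothetical sketch: tiered access where richer content demands
# release of more personal attributes. Tier names and attribute
# keys are invented, not Shibboleth APIs.

# Each tier lists the attributes a user must release to reach it.
TIERS = {
    "abstract":        [],                           # anonymous access
    "full_text":       ["affiliation"],
    "linked_index":    ["affiliation", "department"],
    "author_comments": ["affiliation", "department", "name", "email"],
}

def accessible_tiers(released_attributes):
    """Return the tiers reachable given the attributes a user released."""
    held = set(released_attributes)
    return [tier for tier, required in TIERS.items()
            if set(required) <= held]

# An anonymous user sees only abstracts; releasing more attributes
# unlocks deeper tiers -- the commodification of access.
print(accessible_tiers([]))                           # ['abstract']
print(accessible_tiers(["affiliation", "department"]))
```

The point of the sketch is the incentive structure, not the mechanism: once access control can discriminate on arbitrarily small attribute sets, every increment of content can carry an attribute price.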
New delivery options Like most technology, Shibboleth is good and bad: New types of content, with new options... but new trade-offs. Tiered information access is associated with an increasing surrender of personal identity. Maybe Shibboleth is about identity release - not enabling users' control of their privacy.
Shibboleth abets DRM Have you ever bought an ebook? Elsevier, eBook Forum, and DRM: an example Could Shibboleth facilitate or encourage a world where every journal article or segment of information is wrapped in DRM? In a true trusted system, users are not trusted, and machine communication abstracts the user, but the user must identify themselves to play. As trusted systems mature, e.g., MPEG REL, is Shibboleth a missing piece?
Only a dog once Content providers might request a profile fill-out. Yet use-history might make an e-identity permanent. With persistence in passed IDs, once you release profile information, it's gone for good. Maybe you can only be a dog once. Sumie Ishida once, Sumie Ishida forever. Should universities advise users that false profiles are a viable option when profile requests are offered in place of attribute requests?
Users' rights to privacy: concepts Assuming some users release additional information... What privacy rights will users have to this information? Will they be able to reclaim their privacy? Anonymize their history? Who owns browsing logs and alerts? There are already legal precedents for release of information (the Patriot Act is only the most recent example). What might hold for article requests or SDIs?
Users' rights to privacy: examples ➢ Video stores maintain a history of rentals. ➢ Powells bookstore maintains my purchase history. ➢ You can't say, “Oops, a mistake. I didn't read or look at that.” ➢ Amazon allows me to edit my “Might be interested in” books. Users can spam auto-generated profiles (kludged privacy control). What will be the range of options for privacy-editing?
Better world through advertising Imagine other obvious (and tacky!) scenarios. If I have released some profile data, could a publisher sell my targetID and my custom content alert for “ductal carcinoma in situ” to a surgical implant company? Maybe you do not get bulk mail, but the electronic equivalent: Banner ads! [insert tacky Flash or Shock banner ad here]
Rewarding indiscretion If some users are continually willing to trade privacy for higher tiers of access, then users who value privacy are penalized in the information economy. What are the consequences of these “benefits of indiscretion”? Does it raise the expected baseline of information release? Is there downward pressure on privacy stewardship?
Educating users How do universities and institutions educate users? Will the analogies of e-commerce become apparent to them? Is that the best example? Are there others? How do you decide when to release your credit-card info? Do you do business with every merchant? I do business with Powells, but not Amazon. I release information to PLoS, but not Springer.
Responsibility for education What if users do not care? Should we chaperone users? “... [E]veryone cares some about privacy but only a few care very much.” - RL “Bob” Morgan Shibboleth empowers users, but it also confronts them with new and potentially profound choices.
Is privacy an educational gift? Is this part of the educational gift of a university? Like many decisions young adults make for the first time, the university may be the context. Value your privacy. Choose when you wish to release it. Like water in a stream, it does not return. Let's carefully craft the “click-through license” acknowledgment on each user's Shibboleth initialization.
Privacy policies Content providers seeking identity or profiling data will almost certainly be forced to form privacy policies. Amazon has formed one; perhaps Elsevier eventually will. Who will govern or rate content providers' allegiance to their stated policies? Will we see “EPIC-Approved” click-throughs? The NISO ERM standard describing electronic resources does not now incorporate the concept of a “privacy hygiene rating”.
New forms of contract Can contracts be written to permit users to control their Shibboleth environment? Opt-ins for persistent targetID release? (Not opt-outs.) How richly do you texturize the contract landscape? Libraries have inadequate experience in defining contracts that categorize eligible populations for auth[X] decisions; consortial environments are particularly treacherous.
Privacy templates for licenses? Will libraries and other licensees attempt to impose privacy hygiene into the contracts they write? Will there be a Creative Commons for information release? Add “To prevent buying or selling of personal data.” Add “To permit identity release... automatically, without notification; automatically, with notification; manually, with notification.”...
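The three release modes named above could be modeled directly in a consent policy. A minimal sketch (the enum values and function name are invented, not part of any Shibboleth or license-template vocabulary):

```python
# Hypothetical sketch of the three identity-release modes:
# silent automatic, notified automatic, and notified manual.
from enum import Enum

class ReleaseMode(Enum):
    AUTOMATIC_SILENT   = "automatically, without notification"
    AUTOMATIC_NOTIFIED = "automatically, with notification"
    MANUAL_NOTIFIED    = "manually, with notification"

def release_attribute(mode, user_consents):
    """Decide whether an attribute is released, and whether the
    user is told. `user_consents` matters only in the manual mode."""
    if mode is ReleaseMode.AUTOMATIC_SILENT:
        return {"released": True, "notified": False}
    if mode is ReleaseMode.AUTOMATIC_NOTIFIED:
        return {"released": True, "notified": True}
    # MANUAL_NOTIFIED: release only on explicit consent
    return {"released": user_consents, "notified": True}

print(release_attribute(ReleaseMode.MANUAL_NOTIFIED, user_consents=False))
# {'released': False, 'notified': True}
```

Note that only the manual mode gives the user a veto; a license template that stops at “with notification” still leaves release automatic.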
Accidental or coerced privacy loss One can easily imagine situations in which university faculty accidentally force, or encourage with prejudice, a loss of privacy. Faculty might require access to upper tiers of content: “Read the author's commentary on this article.” What about FERPA's impact? Do we track and chaperone students who have already opted out? State and federal privacy laws might also be increasingly relevant to governance of conditions for attribute release.
Do hosts have rights to data? (What about us?) Do the institutional homes (e.g., universities) of user communities have a right to data generated by users? On our side, we probably must retain authentication and attribute-release (AR) records for reasons of accountability and policy enforcement. Is there a Chinese wall? Is it recognized? Enforced? Audited?
Are the good, good? What is the university's responsibility to its community in the context of federated authorization systems? Are community and individual rights published? Do users (faculty, students, and staff) have a choice? Is that choice real? If you want to take this class, agree to these terms. If you want to obtain this degree, agree to these terms. Participating in an institution entails certain reciprocal obligations – is one of them to participate in access licensing?
Dis-intermediated privacy What about n-tier applications? A system contacts another system on your behalf. The user authorizes, but has no direct relationship with any of tiers 2..n. What does the matrix of responsibilities look like? Is it comprehensible to both machine and user?
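The n-tier problem can be sketched in a few lines (all service names below are invented): the user consents only to the first tier, yet the released attributes travel the whole chain, and only an audit log shows who actually saw them.

```python
# Hypothetical sketch of an n-tier request: the user consents only to
# tier 1, but the attributes travel onward to tiers the user never
# sees. All service names are invented for illustration.
def forward_request(attributes, tiers, audit_log):
    """Pass the user's attributes down a chain of services,
    recording which tier received what."""
    for tier in tiers:
        audit_log.append((tier, sorted(attributes)))
    return audit_log

log = []
forward_request({"affiliation", "email"},
                ["portal", "link-resolver", "publisher", "ad-network"],
                log)
# The user authorized "portal"; the log shows three other
# services also received the attributes.
for tier, attrs in log:
    print(tier, attrs)
```

The matrix of responsibilities is exactly this log, per attribute per tier: legible to the machine, but rarely surfaced to the user who clicked once at tier 1.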
Robots and men There are many different costs in the world that we make. How should we seek to intervene and shape these issues? Some examples may already exist. A European Directive of 1995 declares that decisions adverse to a person must be reviewed by a human agent before being implemented. Are there “robot laws” for privacy?
Isaac Asimov, “Runaround” (1942) Asimov's Three Laws of Robotics: First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm. Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.