When Copyright and Contracts Collide: Advocacy for Library and User Rights

A copyright symbol exploding amid flying pages, set against a stormy sky and mountainous landscape.
AI-generated image via ChatGPT

In the ever-evolving landscape of digital access to scholarly research, libraries face new challenges as they navigate the intersection of copyright law and contractual agreements. Academic institutions increasingly rely on digital content, and understanding how copyright exceptions and contract law interact is crucial for protecting the rights of libraries and our users.

Tim Vollmer (Scholarly Communication & Copyright Librarian, UC Berkeley), Sara Benson (Copyright Librarian and Associate Professor, University of Illinois-Urbana Champaign), Jonathan Band (copyright attorney and counsel to the Library Copyright Alliance), and Jim Neal (University Librarian Emeritus, Columbia University) presented on these issues at the 2024 American Library Association Annual Conference in San Diego. Our panel was titled “When Copyright and Contracts Collide: Advocacy for Library and User Rights.”

The Role of Copyright Exceptions

Sara set the stage for our discussion by describing the importance of limitations and exceptions to copyright that empower libraries, research, and teaching. For example, Section 108 of the U.S. Copyright Act allows libraries and archives to make limited copies of copyrighted materials for preservation, replacement, fulfilling interlibrary loan requests, and more. Fair use—Section 107 of the Act—permits limited use of copyrighted works without having to seek the copyright holder’s permission when the use is for purposes such as teaching, research, scholarship, reporting, criticism, or parody. Faculty, students, and academic authors leverage fair use when they incorporate copyrighted materials for teaching, research, and publishing. And the fair use exception has played an increasingly important role in facilitating new types of scholarly research, including text and data mining.

The Threat of Contractual Override

Despite these protections, contractual agreements can sometimes override copyright exceptions. Vendor licensing terms may include clauses that restrict activities such as text and data mining. And even though fair use is a statutory right (meaning it’s in the law) in the U.S., and even though court cases have confirmed that activities such as text and data mining fall under fair use, there is nothing to stop private parties such as academic publishers from “contracting around” fair use for actions that are already lawful.

As a result, academic libraries are forced to negotiate and often pay significant sums each year to try to preserve fair use rights for campus scholars through the database and electronic content license agreements that they sign.

Jonathan discussed alternative international approaches to the problem of contractual override. The European Union, for example, has implemented directives that nullify contract terms which override specific copyright exceptions. Countries like Australia, New Zealand, and Norway have also adopted similar measures. However, the United States and Canada lack comprehensive contract override prevention laws, making it challenging to protect copyright exceptions at the national level.

Advocating for Fair Contracts in Library Licensing

Tim discussed how academic libraries are demanding license agreements that preserve fair use rights. But at the same time, libraries are already starting to see contract amendments put forth by scholarly publishers that attempt to impose outright bans on any use of artificial intelligence (AI) tools with the content we’re licensing from them. The challenge is that we know researchers are using library-licensed materials for many AI uses in the context of nonprofit scholarship and research, and these uses should be fair use, just as it is fair use for researchers to conduct text and data mining on licensed resources.

Library workers can smartly negotiate to protect the rights of instructors, students, and other academic community members to use library-licensed resources in the ways they need to conduct their teaching and research while simultaneously taking into consideration the concerns of publishers.

Moving Forward: A Coordinated Approach

To address the issue of contractual override, Jim suggested several approaches, including educating library stakeholders such as administrators and faculty, building constructive relationships with publishers, monitoring international developments, and pursuing legislative change to protect copyright exceptions.

The University of California Libraries are already collaborating on this and related issues with our colleagues. After outreach to several library and faculty committees, the UC’s Academic Senate sent a letter to UC President Michael Drake to advocate that the UC Libraries need to be able to negotiate to preserve fair use rights when licensing electronic resources—including the rights to conduct computational research and utilize AI tools in academic studies and scholarship. President Drake and UC System Provost and Executive Vice President for Academic Affairs Katherine S. Newman affirmed this commitment.

Please reach out to schol-comm@berkeley.edu with any questions. For more information, please see the links below.


UC Berkeley Library to Copyright Office: Protect fair uses in AI training for research and education

Madison Building, Library of Congress
Copyright Matt H. Wade, licensed CC BY-NC-SA 3.0

We are pleased to share the UC Berkeley Library’s response to the U.S. Copyright Office’s Notice of Inquiry regarding artificial intelligence and copyright. Our response addresses the essential fair use right relied upon by UC Berkeley scholars in undertaking groundbreaking research, and the need to preserve access to the underlying copyright-protected content so that scholars using AI systems can conduct research inquiries.

In this blog post, we explain what the Copyright Office is studying, and why it was important for the Library to make scholars’ voices heard.

What the Copyright Office is studying and why

Loosely speaking, the Copyright Office wants to understand how to set policy for copyright issues raised by artificial intelligence (“AI”) systems.

Over the last year, AI systems and the rapid growth of their capabilities have attracted significant attention. One type of AI, referred to as “generative AI”, is capable of producing outputs such as text, images, video, or audio (including emulating a human voice) that would be considered copyrightable if created by a human author. These systems include, for instance, the chatbot ChatGPT, and text-to-image generators like DALL·E, Midjourney, and Stable Diffusion. A user can prompt ChatGPT to write a short story that features a duck and a frog who are best friends, or prompt DALL·E to create an abstract image in the style of a Jackson Pollock painting. Generative AI systems are relevant to and impact many educational activities on a campus like UC Berkeley, but (at least to date) have not been the key facilitator of campus research methodologies. 

Instead, in the context of research, scholars have been relying on AI systems to support a set of research methodologies referred to as “text and data mining” (or TDM). TDM utilizes computational tools, algorithms, and automated techniques to extract revelatory information from large sets of unstructured or thinly structured digital content. Imagine you have a book like “Pride and Prejudice.” Depending on your scholarly inquiry, there are nearly infinite volumes of information stored inside that book: how many female vs. male characters there are, what types of words the female characters use as opposed to the male characters, what types of behaviors the female characters display relative to the males, and so on. TDM allows researchers to identify and analyze patterns, trends, and relationships across volumes of data that would otherwise be impossible to sift through by close examination of one book or item at a time.

Not all TDM research methodologies necessitate the usage of AI systems to extract this information. For instance, as in the “Pride and Prejudice” example above, sometimes TDM can be performed by developing algorithms to detect the frequency of certain words within a corpus, or to parse sentiments based on the proximity of various words to each other. In other cases, though, scholars must employ machine learning techniques to train AI models before the models can make a variety of assessments. 
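As a concrete (and purely illustrative) example of the algorithmic approach, the short Python sketch below counts how often a handful of chosen words appear in a plain-text copy of a novel. The file name and word list are assumptions for demonstration, not part of any particular study:

```python
# Hypothetical sketch of the non-AI approach: tally how often chosen
# words appear in a plain-text copy of a novel.
from collections import Counter
from pathlib import Path
import re

TERMS = {"she", "her", "he", "his", "him"}  # a crude proxy for gendered usage

def term_counts(path):
    """Tally occurrences of each target word in one text file."""
    words = re.findall(r"[a-z']+", Path(path).read_text(encoding="utf-8").lower())
    return Counter(w for w in words if w in TERMS)

print(term_counts("pride_and_prejudice.txt"))
# prints something like Counter({'her': ..., 'she': ...})
```

No model training is involved here: the algorithm simply detects and tallies the words it is told to look for.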

Here is an illustration of the distinction: Imagine a scholar wishes to assess the prevalence with which 20th century fiction authors write about notions of happiness. The scholar likely would compile a corpus of thousands or tens of thousands of works of fiction, and then run a search algorithm across the corpus to detect the occurrence or frequency of words like “happiness,” “joy,” “mirth,” “contentment,” and synonyms and variations thereof. But if a scholar instead wanted to establish the presence of fictional characters who embody or display characteristics of being happy, the scholar would need to employ discriminative modeling (a classification and regression technique) that can train AI to recognize the appearance of happiness by looking for recurring indicia of character psychology, behavior, attitude, conversational tone, demeanor, appearance, and more. This is not using a generative AI system to create new outputs, but rather training a non-generative AI system to predict or detect existing content. And to undertake this type of non-generative AI training, a scholar would need to use a large volume of often copyright-protected works.
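For the second, machine-learning approach, here is a hedged sketch using scikit-learn: a tiny discriminative classifier trained to flag passages whose characters display happiness. The four labeled passages are hypothetical stand-ins; a real study would train on thousands of human-annotated excerpts drawn from the corpus.

```python
# Hypothetical sketch: train a discriminative (non-generative) model to
# detect passages in which characters display happiness.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

passages = [
    "She laughed and danced through the garden, delighted with everything.",
    "He trudged home in silence, shoulders bent under the rain.",
    "Their reunion was all warmth, smiles, and easy laughter.",
    "The letter left her hollow, staring at the cold hearth.",
]
labels = [1, 0, 1, 0]  # 1 = characters display happiness

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(passages, labels)
print(model.predict(["The child beamed as the parade went by."]))
```

Note that the trained model only predicts a label for new text; it does not reproduce the passages it was trained on, a distinction that is central to the fair use discussion below.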

The Copyright Office is studying both of these kinds of AI systems—that is, both generative AI and non-generative AI. Having been contacted by stakeholders across sectors and industries with diverse views about how AI systems should be regulated, the Office is asking a variety of questions in response. Some of the concerns expressed by stakeholders include:

  • Who is the “author” of generative AI outputs?
  • Should people whose voices or images are used to train generative AI systems have a say in how their voices or images are used? 
  • Should the creator of an AI system (whether generative or non-generative) need permission from copyright holders to use copyright-protected materials in training the AI to predict and detect things?
  • Should copyright owners get to opt out of having their content used to train AI?
  • Should ethics be considered within copyright regulation?

Several of these questions are already the subject of pending litigation. While these questions are being explored by the courts, the Copyright Office wants to understand the entire landscape better as it considers what kinds of AI copyright regulations to enact.

The copyright law and policy landscape underpinning the use of AI models is complex, and whatever regulatory decisions the Copyright Office makes will carry ramifications for global enterprise, innovation, and trade. The Copyright Office’s inquiry thus raises significant and timely legal questions, many of which we are only beginning to understand.

For these reasons, the Library has taken a cautious and narrow approach in its response to the inquiry: we address only two key principles regarding fair use and licensing, as these issues bear upon the nonprofit education, research, and scholarship undertaken by scholars who rely on (typically non-generative) AI models. In brief, the Library wants to ensure that (1) the voices of scholars, and those of the academic libraries who support them, are heard in preserving fair use in AI training, and that (2) copyright-protected content remains available for AI training to support nonprofit education and research.

Why the study matters for fair use

Previous court cases like Authors Guild v. HathiTrust, Authors Guild v. Google, and A.V. ex rel. Vanderhye v. iParadigms have addressed fair use in the context of TDM and determined that the reproduction of copyrighted works to create and text mine a collection of copyright-protected works is a fair use. These cases further hold that making derived data, results, abstractions, metadata, or analysis from the copyright-protected corpus available to the public is also fair use, as long as the research methodologies or data distribution processes do not re-express the underlying works to the public in a way that could supplant the market for the originals. Performing all of this work is essential for TDM-reliant research studies.

For the same reasons that the TDM process is fair use of copyrighted works, the training of AI tools to do that TDM should also be fair use, in large part because training does not reproduce or communicate the underlying copyrighted works to the public. Here, there is an important distinction to make between training inputs and outputs, in that the overall fair use of generative AI outputs cannot always be predicted in advance: The mechanics of generative models’ operations suggest that there are limited instances in which generative AI outputs could indeed be substantially similar to (and potentially infringing of) the underlying works used for training; this substantial similarity is possible typically only when a training corpus is rife with numerous copies of the same work. However, the training of AI models by using copyright-protected inputs falls squarely within what courts have determined to be a transformative fair use, especially when that training is for nonprofit educational or research purposes. And it is essential to protect the fair use rights of scholars and researchers to make these uses of copyright-protected works when training AI.

Further, if these fair use rights were overridden by limiting AI training access to only “safe” materials (like public domain works or works for which training permission has been granted via license), the harm would be twofold: it would exacerbate bias in the research questions that can be studied and the methodologies available to study them, and it would amplify the views of an unrepresentative set of creators, given the limited types of materials available with which to conduct the studies.

Why access to AI training content should be preserved

For the same reasons, it is important that scholars’ ability to access the underlying content for AI training be preserved. The fair use provision of the Copyright Act does not afford copyright owners a right to opt out of others’ use of their works, and for good reason: if content creators were able to opt out, fair use would be undermined, and little content would be available to build upon for the advancement of science and the useful arts. Accordingly, to the extent that the Copyright Office is considering creating a regulatory right for creators to opt out of having their works included in AI training, it is paramount that such an opt-out provision not be extended to any AI training or activities that constitute fair use, particularly in the nonprofit educational and research contexts.

AI training opt-outs would be a particular threat for research and education because fair use in these contexts is already becoming an out-of-reach luxury even for the wealthiest institutions. Academic libraries are forced to pay significant sums each year to try to preserve fair use rights for campus scholars through the database and electronic content license agreements that libraries sign. In the U.S., the prospect of “contractual override” means that, although fair use is statutorily provided for, private parties (like publishers) may “contract around” fair use by requiring libraries to negotiate for otherwise lawful activities (such as conducting TDM or training AI for research), and often to pay additional fees for the right to conduct these lawful activities on top of the cost of licensing the content itself. When such costs are beyond institutional reach, the publisher or vendor may then offer similar contractual terms directly to research teams, who may feel obliged to agree in order to get access to the content they need. Vendors may charge tens or even hundreds of thousands of dollars for this type of access.

This “pay-to-play” landscape of charging institutions for the opportunity to rely on existing statutory rights is particularly detrimental for TDM research methodologies, because TDM research often requires massive datasets with works from many publishers, including copyright owners who cannot be identified or who are unwilling to grant licenses. If the Copyright Office were to enable rightsholders to opt out of having their works fairly used for training AI, then academic institutions and scholars would face even greater hurdles in licensing content for research purposes.

First, it would be operationally difficult for academic publishers and content aggregators to amass and license the “leftover” body of copyrighted works that remain eligible for AI training. Costs associated with publishers’ efforts to compile “AI-training-eligible” content would be passed along as additional fees charged to academic libraries. In addition, rightsholders might opt out of allowing their works to be fairly used for AI training, and then turn around and charge AI usage fees to scholars (or libraries)—essentially licensing back fair uses for research. These scenarios would impede scholarship by or for research teams who lack grant or institutional funds to cover these additional expenses; penalize research in or about underfunded disciplines or geographical regions; and result in bias as to the topics and regions studied.

Scholars need to be able to utilize existing knowledge resources to create new knowledge goods. Congress and the Copyright Office clearly understand the importance of facilitating access and usage rights, having implemented the statutory fair use provision without any exclusions or opt-outs. This status quo should be preserved for fair use AI training—and particularly in the nonprofit educational or research contexts. 

Our office is here to help

No matter what happens with the Copyright Office’s inquiry and any regulations that ultimately may be established, the UC Berkeley Library’s Office of Scholarly Communication Services is here to help you. We are a team of copyright law and information policy (licensing, privacy, and ethics) experts who help UC Berkeley scholars navigate legal, ethical, and policy considerations in utilizing resources in their research and teaching. And we are national and international leaders in supporting TDM research—offering online tools, trainings, and individual consultations to support your scholarship. Please feel free to reach out to us with any questions at schol-comm@berkeley.edu.


Making it easier to reuse and share Thérèse Bonney photography

Spain: Parroquia Del Dulce Nombre de Maria in Doña Carlota, Madrid.
BANC PIC 1982.111.03.0287–NNEG
Thérèse Bonney, © The Regents of the University of California, The Bancroft Library, University of California, Berkeley. This work is made available under a Creative Commons Attribution 4.0 license.

As part of UC Berkeley Library’s trend-setting efforts to make all our collections easier to use, reuse, and publish from, we are excited to announce that we have eliminated hurdles to the reuse of renowned photographer Thérèse Bonney’s photographs. Every photograph ever taken by Bonney is now licensed under a Creative Commons Attribution 4.0 license (CC BY 4.0). This means anyone around the world can incorporate Bonney’s photos into papers, projects, and productions—even commercial ones—without ever needing further permission or another license from us.

Thérèse Bonney Copyright

Thérèse Bonney (1894-1978) was a documentary photographer and war correspondent. She concentrated much of her work on documenting conditions in Europe during World War II. Prior to her work as a war correspondent, Bonney extensively photographed French architecture and design, as well as writers and artists such as Joan Miró, Fernand Léger, and Gertrude Stein. Her photographs have been exhibited at the Museum of Modern Art, the Library of Congress, and Carnegie Hall. 

Bonney transferred the copyright in all of her work to the UC Regents, to be managed by the UC Berkeley Library. This includes Bonney materials at the UC Berkeley Library and any Bonney-authored or Bonney-created materials held by other institutions.

Although people did not previously need the UC Regents’ permission (sometimes called a “license”) to make fair uses of Bonney’s works, thanks to the progressive permissions policy we created, prior to July 2022 people did need a license to reuse Bonney’s works if their intended use exceeded fair use. As a result, hundreds of book publishers, journals, and filmmakers sought licenses from the Library each year to publish Bonney’s photos.

The UC Berkeley Library recognized this as an unnecessary barrier to research and scholarship, and has now exercised its authority on behalf of the UC Regents to freely license Bonney’s entire corpus under CC BY. This license is designed for maximum dissemination and use of the materials.

How to use Bonney’s works going forward

Now that all Bonney photographs have a CC BY license applied to them, no additional permission or license from the UC Regents or anyone else is needed to use Bonney’s work, even if you are using the work for commercial purposes. No fees will be charged, and no paperwork is necessary.

The CC BY license does require attribution to the copyright owner, which in this case is the UC Regents. The Library suggests the following attribution:

Thérèse Bonney, © The Regents of the University of California, The Bancroft Library, University of California, Berkeley. This work is made available under a Creative Commons Attribution 4.0 license.

What’s ahead for the Library

The Library now has some work to do to make our catalog and other information sources about the Bonney photos reflect the application of the CC BY license. This means updating things like the Bonney collection finding aid and the metadata for the individual digitized photos we make available online. In the meantime, you can rely on written confirmation that we’ve applied the CC BY license by consulting the Easy to Use Collections page of our permissions guide.

In the coming year, we hope to add many more collections to that Easy to Use Collections page, too. We’ll be spending some time reviewing materials for which the UC Regents own copyright, and seeing what we can “open up” with CC BY licenses as well. Stay tuned.

— — — — —

This post was written by the Library’s Office of Scholarly Communication Services.

UC Berkeley celebrates Love Data Week with great talks and tips!

Last week, the University Library, the Berkeley Institute for Data Science (BIDS), and the Research Data Management program were delighted to host Love Data Week (LDW) 2018 at UC Berkeley. Love Data Week is a nationwide campaign designed to raise awareness about data visualization, management, sharing, and preservation. The theme of this year’s campaign was “data stories,” highlighting how data is being used in meaningful ways to shape the world around us.

At UC Berkeley, we hosted a series of events designed to help researchers, data specialists, and librarians better address and plan for research data needs. The events covered issues related to collecting, managing, publishing, and visualizing data. Attendees gained hands-on experience using APIs, learned about resources that the campus provides for managing and publishing research data, and engaged in discussions around researchers’ data needs at different stages of the research process.

Participants from many campus groups (e.g., LBNL, CSS-IT) were eager to continue the stimulating conversation around data management. Check out the full program and information about the presented topics.

Photographs by Yasmin AlNoamany for the University Library and BIDS.

Eric Livingston explains the differences among Elsevier’s APIs.

LDW at UC Berkeley kicked off with a walkthrough and demos of the Scopus APIs (Application Programming Interfaces), led by Eric Livingston of the publishing company Elsevier. Elsevier provides a set of APIs that allow users to access the content of journals and books it publishes.

In the first part of the session, Eric provided a quick introduction to APIs and an overview of Elsevier’s APIs. He illustrated the purposes of the different APIs that Elsevier provides, such as the ScienceDirect APIs, SciVal API, Engineering Village API, Embase APIs, and Scopus APIs. As Eric mentioned, anyone can get free access to Elsevier APIs, and content published by Elsevier under Open Access licenses is fully available. Eric explained that the Scopus APIs allow users to access curated abstracts and citation data from all scholarly journals indexed by Scopus, Elsevier’s abstract and citation database. He detailed several popular Scopus APIs, such as the Search API, Abstract Retrieval API, Citation Count API, Citation Overview API, and Serial Title API. Eric also gave an overview of the amount of data the Scopus database holds.

The attendees conduct live queries on Scopus APIs.

In the second half of the workshop, Eric explained how the Scopus APIs work and how to get an API key, and showed different authentication methods. He walked the group through live queries, showed how to extract data from the APIs, and demonstrated how to debug queries using the advanced search. He also talked about the limitations of the APIs and provided tips and tricks for working with them.
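For readers who want to experiment, below is a hedged sketch of the kind of live query Eric demonstrated. The endpoint and response fields follow Elsevier’s public Scopus Search API documentation, and the API key is a placeholder you would replace with your own (free registration, as Eric noted):

```python
# Hypothetical sketch: run a simple Scopus Search API query and print titles.
import requests

API_KEY = "YOUR-API-KEY"  # placeholder; obtain a key from https://dev.elsevier.com

resp = requests.get(
    "https://api.elsevier.com/content/search/scopus",
    params={"query": "TITLE-ABS-KEY(text mining)", "count": 5},
    headers={"X-ELS-APIKey": API_KEY, "Accept": "application/json"},
)
resp.raise_for_status()
for entry in resp.json()["search-results"]["entry"]:
    print(entry.get("dc:title"))
```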


Eric explains code snippets for querying Scopus APIs.

Eric left the attendees with working, actionable code and scripts to pull and retrieve data from the Scopus APIs.

The Data Stories and Visualization Panel.

On the second day, we hosted a Data Stories and Visualization Panel, featuring Claudia von Vacano (D-Lab), Garret S. Christensen (BIDS and BITSS), Orianna DeMasi (Computer Science and BIDS), and Rita Lucarelli (Department of Near Eastern Studies). The talks and discussions centered upon how data is being used in creative and compelling ways to tell stories, as well as the rewards and challenges of supporting groundbreaking research when the underlying research data is restricted.

Claudia von Vacano talks about the Online Hate Index project.

Claudia von Vacano, the Director of D-Lab, discussed the Online Hate Index (OHI), a joint initiative of the Anti-Defamation League’s (ADL) Center for Technology and Society that uses crowdsourcing and machine learning to develop scalable detection of the growing amount of hate speech within social media. In its recently completed initial phase, the project focused on training a model on an unbiased dataset collected from Reddit. Claudia explained the process, from identifying the problem, defining hate speech, and establishing rules for human coding, through building, training, and deploying the machine learning model. Going forward, the project team plans to improve the accuracy of the model and extend it to other social media platforms.

Garret S. Christensen talks about his experience with research data.

Next, Garret S. Christensen, a BIDS and BITSS fellow, talked about his experience with research data. He started by providing background on his research, then discussed the challenges he faced in collecting his research data. The main research questions Garret investigated were: How are people responding to military deaths? Do large numbers of deaths, or high-profile deaths, affect people’s decision to enlist in the military?

Garret shows the relationship between U.S. military death rates and the number of applicants to the military.

Garret discussed the challenges of obtaining and working with Department of Defense data acquired through a Freedom of Information Act request for the purpose of researching war deaths and military recruitment. Despite all the challenges Garret faced and the time he spent getting the data, he succeeded in assembling the data into a public repository. The information on deaths in the U.S. military from January 1, 1990 to November 11, 2010, obtained through the Freedom of Information Act request, is now available on Dataverse. At the end, Garret showed how deaths and recruits have a negative relationship.

Orianna DeMasi related her experience of working on real problems with real human data to taming dragons.

Orianna DeMasi, a graduate student in Computer Science and a BIDS Fellow, shared her story of working with human subjects data. The focus of Orianna’s research is on building tools to improve mental healthcare. Orianna framed her account of collecting and working with human subjects data as a fairy tale. She noted that working with human data makes security and privacy essential, and that it’s easy to get blocked “waiting for data” rather than advancing the project in parallel with collecting or accessing data. At the end, Orianna advised the attendees that “we need to keep our eyes on the big problems and data is only the start.”

Rita Lucarelli discusses the Book of the Dead in 3D project.

Rita Lucarelli, of the Department of Near Eastern Studies, discussed the Book of the Dead in 3D project, which shows how photogrammetry can aid the visualization and study of different sets of data within their own physical context. According to Rita, the “Book of the Dead in 3D” project aims in particular to create a database of “annotated” models of the ancient Egyptian coffins of the Hearst Museum, which is radically changing the scholarly approach to and study of these inscribed objects, while at the same time posing challenges in relation to data sharing and the publication of the artifacts. Rita noted that the metadata keeps growing, and that digital data and digitization present challenges of their own.

It was fascinating to hear about Egyptology and how to visualize 3D ancient objects!

Daniella Lowenberg presents on Research Data Management Planning and Publishing.

We closed out LDW 2018 at UC Berkeley with a session about Research Data Management Planning and Publishing. In the session, Daniella Lowenberg (University of California Curation Center) started by discussing the reasons to manage, publish, and share research data on both practical and theoretical levels.

Daniella shares tips about publishing research data.

Daniella shared practical tips about why, where, and how to manage research data and prepare it for publishing. She discussed relevant data repositories that UC Berkeley and other entities offer. Daniella also illustrated how to make data reusable, and highlighted the importance of citing research data and how this maximizes the benefit of research.

Daniella’s live demo on using Dash for publishing research data.

At the end, Daniella presented a live demo on using Dash for publishing research data and encouraged UC Berkeley workshop participants to contact her with any questions about data publishing. In a lively debate, researchers shared with Daniella their experiences managing research data and highlighted what has worked and what has proved difficult.

We received overwhelmingly positive feedback from the attendees, who also expressed interest in having similar workshops to understand the broader perspectives and skills needed to help researchers manage their data.

I would like to thank BIDS and the University Library for sponsoring the events.

Yasmin AlNoamany



Great talks and tips in Love Your Data Week 2017

This week, the University Library and the Research Data Management program were delighted to participate in the Love Your Data (LYD) Week campaign by hosting a series of workshops designed to help researchers, data specialists, and librarians to better address and plan for research data needs. The workshops covered issues related to managing, securing, publishing, and licensing data. Participants from many campus groups (e.g., LBNL, CSS-IT) were eager to continue the stimulating conversation around data management. Check out the full program and information about the presented topics.

Photographs by Yasmin AlNoamany for the University Library.

The Securing Research Data Panel.

The first day of LYD week at UC Berkeley kicked off with a discussion panel about Securing Research Data, featuring Jon Stiles (D-Lab, Federal Statistical RDC), Jesse Rothstein (Public Policy and Economics, IRLE), and Carl Mason (Demography). The discussion centered upon the rewards and challenges of supporting groundbreaking research when the underlying research data is sensitive or restricted. In a lively debate, social science researchers detailed their experiences working with sensitive research data and highlighted what has worked and what has proved difficult.

Chris Hoffman illustrated Securing Research Data – A campus-wide project.

At the end, Chris Hoffman, the Program Director of the Research Data Management program, described a campus-wide project about Securing Research Data. Hoffman said the goals of the project are to improve guidance for researchers, benchmark other institutions’ services, and assess the demand and make recommendations to campus. Hoffman asked the attendees for their input about the services that the campus provides.

The attendees of the Securing Research Data workshop ask questions about data protection.
Rick Jaffe and Anna Sackmann in the RDM Tools and Tips: Box and Drive workshop.

On the second day, Rick Jaffe (Research IT) and Anna Sackmann (UC Berkeley Library) hosted a workshop on best practices for using Box and bDrive to manage documents, files, and other digital assets. The workshop covered multiple aspects of using Box and bDrive, such as their key characteristics and their personal and collaborative use features and tools (including permission controls, special purpose accounts, pushing and retrieving files, and more). The workshop also covered the differences between the commercial and campus (enterprise) versions of Box and Drive. Check out the RDM Tools and Tips: Box and Drive presentation.

Anna and Rick ask attendees to do a group activity to get them talking about their workflow.

We closed out LYD Week 2017 at UC Berkeley with a workshop about Research Data Publishing and Licensing 101. In the workshop, Anna Sackmann and Rachael Samberg (UC Berkeley’s Scholarly Communication Officer) shared practical tips about why, where, and how to publish and license your research data. (You can also read Samberg & Sackmann’s related blog post about research data publishing and licensing.)

Anna Sackmann talks about publishing research data at UC Berkeley.

In the first part of the workshop, Anna Sackmann talked about reasons to publish and share research data on both practical and theoretical levels. She discussed relevant data repositories that UC Berkeley and other entities offer, and provided criteria for selecting a repository. Check out Anna Sackmann’s presentation about Data Publishing.

Anna Sackmann differentiates between the repositories available at UC Berkeley.
Rachael Samberg, UC Berkeley’s Scholarly Communication Officer.

During the second part of the presentation, Rachael Samberg illustrated the importance of licensing data for reuse, and how copyright and the agreements researchers enter into affect licensing rights and choices. She also distinguished between data attribution and data licensing. Samberg mentioned that data licensing helps resolve ambiguity about permissions to use data sets and incentivizes others to reuse and cite data. At the end, Samberg explained how people can license their data and advised UC Berkeley workshop participants to contact her with any questions about data licensing.

Rachael Samberg explains the difference between attribution and license.

Check out the slides from Rachael Samberg’s presentation about data licensing below.


The workshops received positive feedback from the attendees, who also expressed interest in having similar workshops to understand the broader perspectives and skills needed to help researchers manage their data.


Yasmin AlNoamany

Special thanks to Rachael Samberg for editing this post.


Working Group – Licensing

Margaret Phillips will lead a hardy band of selectors — Jim Church, Dana Jemison, Mary Ann Mahoney and Susan Xue — through a review of the elements we and CDL look for now when negotiating with publishers to license online resources. They will also recommend further elements they think advisable. Their product will be a vetted checklist posted where selectors can find it.

This charge is in response to the Digital Collection Development Plan Task Force Final Report

Principle No. 7: Selection/Licensing — Prospective: When acquiring licensed e-content, the license should meet current standards…

specifically addressing actions 26-29 and 32.