Fine-Tuning Legal-BERT: LLMs For Automated Legal Text Classification

Author(s): Drewgelbard

Originally published on Towards AI.

Unlocking efficient legal document classification with NLP fine-tuning

[Image created by author]

Introduction

In today's fast-paced legal industry, professionals are inundated with an ever-growing volume of complex documents, from intricate contract provisions and merger agreements to regulatory compliance records and court filings. Manually sifting through these documents is not only labor-intensive and time-consuming but also prone to human error and inconsistency. This inefficiency can lead to overlooked risks, regulatory non-compliance, and ultimately financial damage for organizations.

The Challenge

Legal texts are uniquely challenging for natural language processing (NLP) because of their specialized vocabulary, intricate syntax, and the critical importance of context. Terms that look similar in everyday language can carry very different meanings in a legal setting, so generic NLP models often fall short when applied directly to legal documents.

The Solution

This is where fine-tuning specialized language models comes into play. By adapting models that are pre-trained on legal corpora, we can achieve higher accuracy and reliability in tasks like contract analysis, compliance monitoring, and legal document retrieval. In this article, we will look at how Legal-BERT [5], a transformer-based model tailored for legal text, can be fine-tuned to classify contract provisions using the LEDGAR dataset [4], a benchmark dataset designed specifically for the legal domain.

What You'll Learn

By the end of this tutorial, you'll have a complete roadmap for applying Legal-BERT to legal text classification. This guide covers:

- Setting up your environment for NLP tasks involving legal documents.
- Understanding and preprocessing the LEDGAR dataset for optimal model performance.
- Performing exploratory data analysis to gain insights into the dataset's structure.
- Fine-tuning Legal-BERT for multi-class classification of legal provisions.
- Evaluating the model's performance against established benchmarks.
- Discussing challenges and considerations specific to legal NLP applications.

Whether you're a data scientist aiming to deepen your expertise in NLP or a machine learning engineer interested in domain-specific model fine-tuning, this tutorial will equip you with the tools and insights you need to get started.

Table of Contents

1. Environment Setup
2. Dataset Overview
3. Preprocessing and Tokenization
4. Exploratory Data Analysis (EDA)
5. Training and Fine-Tuning
6. Evaluating the Model
7. Conclusion and Key Takeaways

Environment Setup

We will use the Hugging Face Transformers library, which offers pre-trained models and tools to fine-tune them. While not strictly necessary, a GPU will speed up training significantly. If you're using Google Colab, enable the GPU by going to Runtime > Change runtime type and selecting GPU.
First, install the necessary libraries:

```python
!pip install transformers datasets torch scikit-learn
```

```python
# Import necessary dependencies
import os

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset, DatasetDict
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from sklearn.metrics import accuracy_score, f1_score, classification_report, precision_recall_curve

# Set device for GPU usage
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Using device: {device}")
```

Dataset Overview

[Image from the LexGLUE benchmark [1]]

The dataset chosen for this project is LEDGAR (Labeled EDGAR), part of the LexGLUE benchmark for legal language tasks [1, 3]. LEDGAR consists of contract provisions drawn from publicly available SEC filings, also known as Exhibit 10 contracts, which are central to many legal workflows. The dataset includes around 80,000 provisions labeled across 100 categories, ranging from "Agreements" and "Confidentiality" to "Termination" and "Vesting" [3].

LEDGAR is a challenging dataset for NLP models because of its diverse terminology and context-specific labels. The provisions are split into training, validation, and test sets: 60,000 provisions for training, 10,000 for validation, and 10,000 for testing. For this tutorial, we'll download and prepare the dataset using Hugging Face's datasets library. I recommend visiting this link [4] to gain a better understanding of the dataset and the LexGLUE benchmark.

```python
# Load LEDGAR dataset
dataset = load_dataset('lex_glue', 'ledgar')

# Display dataset features
print(dataset['train'].features)

# Get label information
label_list = dataset['train'].features['label'].names
num_labels = len(label_list)
print(f"Number of labels: {num_labels}")
```

[Figure: label counts]

An example of what some of the training data looks like is as follows [4]:

```json
{
  "text": "Executive agrees to be employed with the Company, and the Company agrees to employ Executive, during the Term and on the terms and conditions set forth in this Agreement. Executive agrees during the term of this Agreement to devote substantially all of Executive's business time, efforts, skills and abilities to the performance of Executive's duties ...",
  "label": "Employment"
}
```

Preprocessing and Tokenization

To fine-tune Legal-BERT effectively, we need to prepare the LEDGAR dataset with several preprocessing steps:

- Map labels to indices: Create mappings between label names and indices to ensure compatibility with PyTorch during training.
- Compute token lengths: Calculate the token length of each text example. This helps us understand the data distribution and confirm that the maximum sequence length (set to 512 tokens) is appropriate for the dataset.
- Tokenize texts: Each provision is tokenized using Legal-BERT's tokenizer, which is designed to handle legal terminology.
- Truncate and pad sequences: Truncate texts longer than the maximum length and pad shorter ones, using a max length of 512 tokens. This ensures consistent input lengths.
```python
# Create mappings from label names to indices and vice versa
label2id = {label: idx for idx, label in enumerate(label_list)}
id2label = {idx: label for idx, label in enumerate(label_list)}

# Initialize tokenizer
tokenizer = AutoTokenizer.from_pretrained('nlpaueb/legal-bert-base-uncased')

# Token length computation function
def compute_token_lengths(example):
    tokens = tokenizer.encode(example['text'], add_special_tokens=True)
    example['num_tokens'] = len(tokens)
    return example

# Apply token length computation to the dataset
dataset = dataset.map(compute_token_lengths)

def preprocess_data(examples):
    # Tokenize the texts
    return tokenizer(
        examples['text'],
        truncation=True,        # Truncate texts longer than max_length
        padding='max_length',   # Pad texts shorter than max_length
        max_length=512
    )

# Apply the preprocessing function to the dataset
encoded_dataset = dataset.map(preprocess_data, batched=True)

# Set the format of the dataset to PyTorch tensors
encoded_dataset.set_format(
    type='torch',
    columns=['input_ids', 'attention_mask', 'label']
)

encoded_dataset
```

Exploratory Data Analysis (EDA)

EDA is an essential step in any machine learning workflow, especially when working with large and complex datasets like LEDGAR. By examining the data's structure, distribution, and key characteristics, we can make informed decisions about preprocessing and model setup.

Token Length Distribution

Since our model (Legal-BERT) has a maximum input token […]
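As a rough illustration of what this token length analysis might look like, the sketch below plots the num_tokens column computed earlier and reports how many provisions exceed the 512-token limit. The plotting choices (histogram, bin count, reference line) are assumptions for illustration, not the article's original code.

```python
# Sketch (assumed, not the article's original code): visualize the token length
# distribution using the 'num_tokens' column computed by compute_token_lengths.
import matplotlib.pyplot as plt

train_lengths = dataset['train']['num_tokens']

plt.figure(figsize=(10, 5))
plt.hist(train_lengths, bins=50)
plt.axvline(x=512, color='red', linestyle='--', label='max_length = 512')
plt.xlabel('Number of tokens per provision')
plt.ylabel('Count')
plt.title('Token length distribution (training split)')
plt.legend()
plt.show()

# Share of provisions that will be truncated at 512 tokens
over_limit = sum(1 for n in train_lengths if n > 512)
print(f"Provisions longer than 512 tokens: {over_limit} "
      f"({over_limit / len(train_lengths):.2%})")
```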
