Use this identifier to cite or link to this item: https://repositorio.ufpe.br/handle/123456789/58332

Title: Evaluation of Large Language Models in Contract Information Extraction
Author(s): SILVA, Weybson Alves da
Keywords: Information Extraction; Contract Review; Large Language Models; Natural Language Processing
Issue date: 9-Oct-2024
Citation: SILVA, Weybson Alves da. Evaluation of Large Language Models in Contract Information Extraction. 2024. Undergraduate thesis (Computer Science) - Universidade Federal de Pernambuco, Recife, 2024.
Abstract: Despite the rapid advancement of Large Language Models (LLMs), there is limited research focused on their effectiveness in extracting specific information from contracts. This study evaluates the effectiveness of state-of-the-art models (GPT-3.5-Turbo, Gemini-1.5-Pro, Claude-3.5-Sonnet, and Llama-3-70B-Instruct) in extracting key clauses from contracts using the Contract Understanding Atticus Dataset (CUAD). We explore the impact of prompting strategies and input context configurations across two scenarios: one covering all 41 clause categories and another focusing on a subset of three. Our findings reveal that LLMs can extract contract information efficiently, outperforming traditional human review in terms of time and cost. Performance, however, varies significantly depending on context size and task specificity, with reduced-context approaches and focused extractions often improving recall at the expense of precision. Notably, Claude-3.5-Sonnet, using zero-shot prompting with an output example and a reduced context, achieved a recall of 0.77 and precision of 0.66, surpassing prior benchmarks on full-category extraction. However, performance is inconsistent across clause types. Models like Llama-3-70B-Instruct, while less robust, demonstrated strong performance on simpler tasks, highlighting their potential in targeted use cases. Additionally, retrieval-augmented generation shows potential for improving extraction and efficiency in long documents, though its performance is constrained by retriever accuracy. Our experiments suggest that with further refinement, LLMs could be vital in automating complex legal tasks, particularly in efficiently handling dense legal texts such as contracts.
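
To illustrate the setup the abstract describes, a minimal Python sketch follows. It is not the thesis's code: the prompt wording, the call_llm helper, and the example category are hypothetical placeholders, showing only the shape of a zero-shot prompt with an output example and an exact-match precision/recall computation like the metrics reported above.

    # Illustrative sketch of zero-shot clause extraction with an output example,
    # plus exact-match precision/recall. The prompt wording, the call_llm helper,
    # and the category are hypothetical placeholders, not the thesis's code.
    from typing import Callable

    PROMPT_TEMPLATE = """You are reviewing a contract. Extract every clause that
    belongs to the category "{category}". Return one clause per line, copied
    verbatim from the contract. If no clause matches, return "None".

    Example output:
    This Agreement shall be governed by the laws of the State of New York.

    Contract:
    {contract_text}
    """

    def extract_clauses(call_llm: Callable[[str], str],
                        contract_text: str, category: str) -> list[str]:
        """Build the zero-shot prompt with an output example and parse the reply."""
        prompt = PROMPT_TEMPLATE.format(category=category, contract_text=contract_text)
        reply = call_llm(prompt)
        lines = [line.strip() for line in reply.splitlines() if line.strip()]
        return [] if lines == ["None"] else lines

    def precision_recall(predicted: list[str], gold: list[str]) -> tuple[float, float]:
        """Score predicted clauses against annotated clauses by exact string match."""
        hits = sum(1 for clause in predicted if clause in gold)
        precision = hits / len(predicted) if predicted else 0.0
        recall = hits / len(gold) if gold else 0.0
        return precision, recall

A real evaluation would additionally require the CUAD annotations, the per-category gold spans, and a concrete model client behind call_llm.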
URI: https://repositorio.ufpe.br/handle/123456789/58332
Appears in collections: (TCC) - Ciência da Computação

Files in this item:
File: TCC Weybson Alves da Silva.pdf (3.15 MB, Adobe PDF)


This file is protected by copyright.



This item is licensed under a Creative Commons License.