Sunday, 25 August 2019

Vertical precedents in formal models of precedential constraint

Abstract

The standard model of precedential constraint holds that a court is equally free to modify a precedent of its own and a precedent of a superior court—overruling aside, it does not differentiate horizontal and vertical precedents. This paper shows that no model can capture the U.S. doctrine of precedent without making that distinction. A precise model is then developed that does just that. This requires situating precedent cases in a formal representation of a hierarchical legal structure, and adjusting the constraint that a precedent imposes based on the relationship between the precedent court and the instant court. The paper closes with suggestions for further improvements of the model.
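The distinction the paper formalizes can be illustrated with a toy sketch: whether a precedent binds vertically depends on where the precedent court sits relative to the instant court in the hierarchy. The court names and the three-way classification below are illustrative, not the paper's formalism.

```python
# Toy court hierarchy: map each court to its immediate superior
# (None marks the apex court). Names are illustrative only.
SUPERIOR = {
    "SCOTUS": None,
    "9th Circuit": "SCOTUS",
    "N.D. Cal.": "9th Circuit",
    "5th Circuit": "SCOTUS",
}

def is_superior(precedent_court, instant_court):
    """True if precedent_court is an ancestor of instant_court."""
    court = SUPERIOR.get(instant_court)
    while court is not None:
        if court == precedent_court:
            return True
        court = SUPERIOR.get(court)
    return False

def constraint_kind(precedent_court, instant_court):
    """Classify the precedent's relation to the instant court."""
    if is_superior(precedent_court, instant_court):
        return "vertical (binding)"
    if precedent_court == instant_court:
        return "horizontal (own precedent)"
    return "persuasive only"
```

On this sketch, a district court is bound by its circuit and by the Supreme Court, faces only horizontal constraint from its own prior decisions, and is merely persuaded by sister circuits—the asymmetry the standard model fails to draw.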

Reasoning with dimensions and magnitudes

Abstract

This paper shows how two models of precedential constraint can be broadened to include legal information represented through dimensions. I begin by describing a standard representation of legal cases based on boolean factors alone, and then reviewing two models of constraint developed within this standard setting. The first is the “result model”, supporting only a fortiori reasoning. The second is the “reason model”, supporting a richer notion of constraint, since it allows the reasons behind a court’s decisions to be taken into account. I then show how the initial representation can be modified to incorporate dimensional information and how the result and reason models can be adapted to this new dimensional setting. As it turns out, these two models of constraint, which are distinct in the standard setting, coincide once they are transposed to the new dimensional setting, yielding exactly the same patterns of constraint. I therefore explore two ways of refining the reason model of constraint so that, even in the dimensional setting, it can still be separated from the result model.
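In the standard Boolean-factor setting the result model's a fortiori test can be sketched as follows. This is a minimal reading of the standard model, not the paper's dimensional extension, and the factor names are invented.

```python
# Result model, Boolean-factor setting: a precedent decided for the
# plaintiff constrains the instant case a fortiori when the instant
# case is at least as strong for the plaintiff -- it contains every
# pro-plaintiff factor of the precedent, and no pro-defendant factors
# beyond those already present in the precedent.

def constrains_for_plaintiff(prec_pi, prec_delta, case_pi, case_delta):
    """prec_pi/case_pi: pro-plaintiff factor sets; *_delta: pro-defendant."""
    return prec_pi <= case_pi and case_delta <= prec_delta

# Precedent: f1, f2 favoured the plaintiff; f3 favoured the defendant.
prec_pi, prec_delta = {"f1", "f2"}, {"f3"}

# An instant case with an extra pro-plaintiff factor is constrained;
# one with an extra pro-defendant factor is not.
print(constrains_for_plaintiff(prec_pi, prec_delta, {"f1", "f2", "f4"}, {"f3"}))
print(constrains_for_plaintiff(prec_pi, prec_delta, {"f1", "f2"}, {"f3", "f5"}))
```

The reason model replaces the full pro-plaintiff factor set with only the factors the court cited as its reason, which is what makes the two models come apart in the Boolean setting.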

Interactive virtue and vice in systems of arguments: a logocratic analysis

Abstract

The Logocratic Method, and the Logocratic theory that underwrites it, provide a philosophical explanation of three purposes or goals that arguers have for their arguments: to make arguments that are internally strong (the conclusions follow from the premises, to a greater or lesser degree—to the greatest degree in valid deductive arguments), or that are dialectically strong (winning in some forum of argument competition, as for example in litigation contests between plaintiffs or prosecutors on the one hand and defendants on the other), or that are rhetorically strong (effective at persuading a targeted audience). This article presents the basic terms and methods of Logocratic analysis and then uses a case study to illustrate the Logocratic explanation of arguments. Highlights of this explanation are: the use of a (non-moral) virtue (and vice) framework to explicate the three strengths and weaknesses of arguments that are of greatest interest to arguers in many contexts (including but not limited to the context of legal argument); the Logocratic explication of the structure of abduction generally and of legal abduction specifically; the concept of a system of arguments; and the concept of the dynamic interactive virtue (and vice) of arguments—a property of systems of arguments in which the system as a whole (for example, the set of several arguments typically offered by a plaintiff or by a defendant) is only as virtuous (or vicious) as the component arguments that comprise it. This is especially important because, according to Logocratic theory (and as illustrated in detail in this paper), some arguments, such as abduction and analogical argument, are themselves composed of different logical forms (for example, abduction always plays a role within analogical argument, and either deduction or defeasible modus ponens always plays a role within legal abduction).

Appellate Court Modifications Extraction for Portuguese

Abstract

Appellate Court Modifications Extraction consists of, given an Appellate Court decision, identifying the modifications proposed by the upper Court to the lower Court judge’s decision. In this work, we propose a system to extract Appellate Court Modifications for Portuguese. Information extraction for legal texts has been previously addressed using different techniques and for several languages. Our proposal differs from previous work in two ways: (1) our corpus is composed of Brazilian Appellate Court decisions, in which we look for a set of modifications provided by the Court; and (2) to automatically extract the modifications, we use a traditional Machine Learning approach and a Deep Learning approach, both as alternative solutions and as a combined solution. We tackle the Appellate Court Modifications Extraction task, experimenting with a wide variety of methods. In order to train and evaluate the system, we have built the KauaneJunior corpus, using public data disclosed by the Appellate State Court of Rio de Janeiro jurisprudence database. Our best method, a Bidirectional Long Short-Term Memory network combined with Conditional Random Fields, obtained an \(F_{\beta = 1}\) score of 94.79%.
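A sequence labeller such as the BiLSTM-CRF described here typically emits BIO tags over tokens, which must then be decoded into labelled spans before scoring. The decoder below is a generic sketch of that step; the tag names and the Portuguese example are illustrative, not the corpus's actual label set.

```python
# Decode BIO-tagged tokens into (label, span-text) pairs, the form a
# downstream consumer of a BiLSTM-CRF tagger would use. "B-" opens a
# span, a matching "I-" continues it, anything else closes it.

def decode_bio(tokens, tags):
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(words)) for label, words in spans]

tokens = ["Dou", "provimento", "ao", "recurso"]
tags = ["B-MOD", "I-MOD", "O", "O"]
print(decode_bio(tokens, tags))  # [('MOD', 'Dou provimento')]
```

Span-level \(F_{\beta = 1}\), as reported above, is then computed over exactly these decoded spans rather than over individual token tags.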

Using machine learning to predict decisions of the European Court of Human Rights

Abstract

When courts started publishing judgements, big data analysis (i.e. large-scale statistical analysis of case law and machine learning) within the legal domain became possible. By taking data from the European Court of Human Rights as an example, we investigate how natural language processing tools can be used to analyse texts of the court proceedings in order to automatically predict (future) judicial decisions. With an average accuracy of 75% in predicting the violation of 9 articles of the European Convention on Human Rights, our (relatively simple) approach highlights the potential of machine learning approaches in the legal domain. We show, however, that predicting decisions for future cases based on the cases from the past negatively impacts performance (average accuracy ranging from 58% to 68%). Furthermore, we demonstrate that we can achieve a relatively high classification performance (average accuracy of 65%) when predicting outcomes based only on the surnames of the judges that try the case.
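The drop from 75% to 58–68% accuracy turns on how the train/test split is made: a random split lets the model see cases contemporaneous with the test set, whereas a chronological split mimics genuine prediction of future decisions. A sketch of the chronological variant, with invented field names:

```python
# Chronological train/test split: train only on cases decided before a
# cutoff year and evaluate on later ones, so no information from the
# "future" leaks into training. Field names are illustrative.

def chronological_split(cases, cutoff_year):
    train = [c for c in cases if c["year"] < cutoff_year]
    test = [c for c in cases if c["year"] >= cutoff_year]
    return train, test

cases = [
    {"id": 1, "year": 2008, "violation": True},
    {"id": 2, "year": 2012, "violation": False},
    {"id": 3, "year": 2015, "violation": True},
]
train, test = chronological_split(cases, 2013)
```

Under a random split, case 3 could contribute training signal to its own era's test cases; under the chronological split it cannot, which is the stricter evaluation the paper reports as lowering accuracy.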

Evidence & decision making in the law: theoretical, computational and empirical approaches

Taking stock of legal ontologies: a feature-based comparative analysis

Abstract

Ontologies represent the standard way to model the knowledge about specific domains. This holds also for the legal domain, where several ontologies have been put forward to model specific kinds of legal knowledge. Both for standard users and for law scholars, it is often difficult to get an overall view of the existing alternatives, their main features, and their interlinking with other ontologies. To answer this need, in this paper we analyse the state of the art in legal ontologies and characterise them along some distinctive features. This paper aims to guide generic users and law experts in selecting the legal ontology that best fits their needs and in understanding its specificity, so that proper extensions to the selected model can be investigated.

Introduction for artificial intelligence and law: special issue “natural language processing for legal texts”

Semi-automatic knowledge population in a legal document management system

Abstract

Every organization has to deal with operational risks, arising from the execution of a company’s primary business functions. In this paper, we describe a legal knowledge management system which helps users understand the meaning of legislative text and the relationship between norms. While much of the knowledge requires the input of legal experts, we focus in this article on NLP applications that semi-automate essential time-consuming and lower-skill tasks—classifying legal documents, identifying cross-references and legislative amendments, linking legal terms to the most relevant definitions, and extracting key elements of legal provisions to facilitate clarity and advanced search options. The use of Natural Language Processing tools to semi-automate such tasks makes the proposal a realistic commercial prospect as it helps keep costs down while allowing greater coverage.
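One of the semi-automated tasks mentioned, identifying cross-references, can be approximated at its simplest with a pattern over citation-like phrases. The pattern below is a toy illustration, far narrower than what a production legal document management system would need:

```python
import re

# Toy cross-reference detector for legislative text. Matches phrases
# like "Article 12(3)" or "section 45"; real systems must also resolve
# the reference to its target provision.
CROSS_REF = re.compile(r"\b(?:Article|Section)\s+\d+(?:\(\d+\))?", re.IGNORECASE)

text = "As provided in Article 12(3), and subject to section 45, ..."
refs = CROSS_REF.findall(text)
print(refs)  # ['Article 12(3)', 'section 45']
```

Detected references can then be turned into hyperlinks between provisions, which is what makes the task valuable for navigation even before any deeper semantic analysis.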

Building a corpus of legal argumentation in Japanese judgement documents: towards structure-based summarisation

Abstract

We present an annotation scheme describing the argument structure of judgement documents, a central construct in Japanese law. To support the final goal of this work, namely summarisation aimed at the legal professions, we have designed blueprint models of summaries of various granularities, and our annotation model in turn is fitted around the information needed for the summaries. In this paper we report results of a manual annotation study, showing that the annotation is stable. The annotated corpus we created contains 89 documents (37,673 sentences; 2,528,604 characters). We also designed and implemented the first two stages of an algorithm for the automatic extraction of argument structure, and present evaluation results.
