Sunday, 10 November 2019

An axiomatic characterization of temporalised belief revision in the law

Abstract

This paper presents a belief revision operator that considers time intervals for modelling norm change in the law. The approach relates techniques from belief revision formalisms and time intervals with temporalised rules for legal systems. Our goal is to formalise a temporalised belief base and a corresponding timed derivation, together with a proper revision operator. This operator may remove rules when needed or adapt intervals of time when contradictory norms are added to the system. For the operator, both a constructive definition and an axiomatic characterisation by representation theorems are given.
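
As a rough illustration of the kind of machinery involved (the rule representation, interval arithmetic and names below are a simplification for this post, not the authors' formalism), a revision step over interval-annotated norms might look like this:

```python
from dataclasses import dataclass

@dataclass
class TemporalRule:
    """A norm with the interval [start, end] during which it is in force."""
    name: str
    literal: str          # e.g. "obligatory_tax", or "-obligatory_tax" for its negation
    start: int
    end: int

def conflicts(a: TemporalRule, b: TemporalRule) -> bool:
    """Two rules conflict if they derive complementary literals on overlapping intervals."""
    complementary = a.literal == f"-{b.literal}" or b.literal == f"-{a.literal}"
    overlap = a.start <= b.end and b.start <= a.end
    return complementary and overlap

def revise(base: list[TemporalRule], new: TemporalRule) -> list[TemporalRule]:
    """Add `new`, shrinking or removing older rules whose intervals clash with it."""
    revised = []
    for rule in base:
        if not conflicts(rule, new):
            revised.append(rule)
            continue
        # keep whatever part of the old interval lies strictly before the new norm
        if rule.start < new.start:
            revised.append(TemporalRule(rule.name, rule.literal, rule.start, new.start - 1))
        # ... and whatever part lies strictly after it
        if rule.end > new.end:
            revised.append(TemporalRule(rule.name, rule.literal, new.end + 1, rule.end))
        # if neither branch fires, the old rule is removed entirely
    revised.append(new)
    return revised

base = [TemporalRule("r1", "obligatory_tax", 2000, 2020)]
base = revise(base, TemporalRule("r2", "-obligatory_tax", 2010, 2015))
# r1 survives as [2000, 2009] and [2016, 2020]; r2 covers [2010, 2015]
```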

When expert opinion evidence goes wrong

Abstract

This paper combines three computational argumentation systems to model the sequence of argumentation in a famous murder trial and the appeal procedure that followed. The paper shows how the argumentation scheme for argument from expert opinion can be built into a testing procedure whereby an argument graph is used to interpret, analyse and evaluate evidence-based natural language argumentation of the kind found in a trial. It is shown how a computational argumentation system can do this by combining argumentation schemes with argument graphs. Frighteningly, it is also shown by this example that when there are potentially confusing, conflicting arguments from expert opinion, a jury can all too easily accept a conclusion prematurely, before considering the critical questions that need to be asked.
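
As a loose sketch of the idea (the scheme encoding, function names and verdict labels below are hypothetical, not taken from the three systems the paper combines), an argument from expert opinion can be checked against its critical questions before its conclusion is accepted:

```python
# Walton-style scheme for argument from expert opinion, with its critical questions.
# The data structures and names here are illustrative only.

EXPERT_OPINION_CQS = [
    "expertise",        # How credible is E as an expert?
    "field",            # Is E an expert in the field that A is in?
    "opinion",          # Did E actually assert A?
    "trustworthiness",  # Is E personally reliable (e.g. unbiased)?
    "consistency",      # Is A consistent with what other experts assert?
    "backup_evidence",  # Is A backed by evidence?
]

def evaluate_expert_argument(premises: dict, answered_cqs: dict) -> str:
    """Accept the conclusion only if every premise holds and no critical question
    has been answered negatively; otherwise flag the argument as open or defeated."""
    if not all(premises.values()):
        return "defeated: a premise of the scheme fails"
    negative = [cq for cq, ok in answered_cqs.items() if ok is False]
    if negative:
        return f"defeated: critical question(s) {negative} answered negatively"
    unanswered = [cq for cq in EXPERT_OPINION_CQS if cq not in answered_cqs]
    if unanswered:
        return f"open: critical question(s) {unanswered} not yet considered"
    return "accepted"

verdict = evaluate_expert_argument(
    premises={"E is an expert": True, "E asserted A": True, "A is in E's field": True},
    answered_cqs={"expertise": True, "opinion": True},   # the rest never asked
)
print(verdict)  # "open": a jury accepting A at this point would be doing so prematurely
```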

Modelling competing legal arguments using Bayesian model comparison and averaging

Abstract

Bayesian models of legal arguments generally aim to produce a single integrated model, combining each of the legal arguments under consideration. This combined approach implicitly assumes that variables and their relationships can be represented without any contradiction or misalignment, and in a way that makes sense with respect to the competing argument narratives. This paper describes a novel approach to comparing and ‘averaging’ Bayesian models of legal arguments that have been built independently and with no attempt to make them consistent in terms of variables, causal assumptions or parameterization. The approach involves assessing how well competing models of legal arguments explain or predict facts uncovered before or during the trial process. Models that are more heavily disconfirmed by the facts are given lower weight, via model plausibility measures, in the Bayesian model comparison and averaging framework adopted. In this way a plurality of arguments is allowed, yet a single judgement based on all the arguments remains possible and rational.
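
A minimal sketch of Bayesian model comparison and averaging over independently built argument models, assuming purely illustrative numbers rather than anything from the paper:

```python
import numpy as np

# Two independently built Bayesian models of competing legal arguments
# (e.g. prosecution vs. defence narratives). All numbers are hypothetical.
prior_model_prob = np.array([0.5, 0.5])       # prior plausibility of each model
likelihood_of_facts = np.array([0.02, 0.20])  # P(observed facts | model)
p_guilt_given_model = np.array([0.95, 0.30])  # each model's probability of guilt

# Bayesian model comparison: models more heavily disconfirmed by the facts
# (lower likelihood) receive lower posterior weight.
posterior = prior_model_prob * likelihood_of_facts
posterior /= posterior.sum()

# Bayesian model averaging: a single judgement that still reflects all arguments.
p_guilt_averaged = float(np.dot(posterior, p_guilt_given_model))

print(posterior)         # [0.09 0.91] (rounded)
print(p_guilt_averaged)  # ≈ 0.36
```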

Legal and ethical implications of applications based on agreement technologies: the case of auction-based road intersections

Abstract

Agreement technologies refer to a novel paradigm for the construction of distributed intelligent systems, in which autonomous software agents negotiate to reach agreements on behalf of their human users. Smart Cities are a key application domain for agreement technologies. While several proofs of concept and prototypes exist, such systems are still far from ready to be deployed in the real world. In this paper we focus on a novel method for managing elements of the smart road infrastructures of the future, namely auction-based road intersections. We show that, even though the key technological elements for such methods are already available, multiple non-technical issues need to be tackled before they can be applied in practice. For this purpose, we analyse the legal and ethical implications of auction-based road intersections in the context of international regulations and from the standpoint of Spanish legislation. From this exercise, we extract a set of required modifications, of both a technical and a legal nature, that need to be addressed to pave the way for the potential real-world deployment of such systems in a future that may not be too far away.
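
Purely as an illustration of the auction mechanism under discussion (the second-price rule and the toy bids are assumptions for this sketch, not the paper's protocol), one auction round at such an intersection might look like this:

```python
# Toy sketch of one auction round at an auction-based intersection.
# A second-price (Vickrey) rule is used here purely for illustration.

def run_intersection_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winning vehicle, price paid): highest bidder wins, pays second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

bids = {"car_A": 0.40, "car_B": 0.25, "ambulance": 2.00}
winner, price = run_intersection_auction(bids)
print(winner, price)  # the ambulance crosses first and pays 0.40
```

Even in this toy form, the questions the paper raises are visible: who is liable for the agent's bid, and is it fair that priority can be bought?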

Is hybrid formal theory of arguments, stories and criminal evidence well suited for negative causation?

Abstract

In this paper, I have two primary goals. First, I show that the causal-based story approach in A hybrid formal theory of arguments, stories and criminal evidence (or Hybrid Theory, for short) is ill suited to negative (or absence) causation. In the literature, the causal-based approach requires that hypothetical stories be causally linked to the explanandum. Many take these links to denote physical or psychological causation, or temporal precedence. However, understanding causality in those terms, as I will show, cannot capture cases of negative causation, which are of interest to the law. In keeping with this, I also discuss some of the difficulties Hybrid Theory invites by remaining silent on the nature of its causal links. Second, I sketch a way for Hybrid Theory to overcome this problem. By replacing the original underlying causal structure with contrastive causation in the law, Hybrid Theory can represent reasoning in which the evidence appealed to is linked to the explananda via negative causation.

Vertical precedents in formal models of precedential constraint

Abstract

The standard model of precedential constraint holds that a court is equally free to modify a precedent of its own and a precedent of a superior court; overruling aside, it does not differentiate between horizontal and vertical precedents. This paper shows that no model can capture the U.S. doctrine of precedent without making that distinction. A precise model is then developed that does just that. This requires situating precedent cases in a formal representation of a hierarchical legal structure, and adjusting the constraint that a precedent imposes based on the relationship between the precedent court and the instant court. The paper closes with suggestions for further improvements of the model.
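
A minimal sketch, under simplifying assumptions of my own about the hierarchy and the binding rule (the paper's model is considerably more fine-grained), of how constraint can be made sensitive to the relation between the precedent court and the instant court:

```python
# Illustrative only: a court hierarchy and a hierarchy-sensitive notion of constraint.

COURT_HIERARCHY = {                 # child -> parent (appellate) court
    "district": "circuit",
    "circuit": "supreme",
    "supreme": None,
}

def is_superior(court: str, other: str) -> bool:
    """True if `court` sits above `other` in the hierarchy."""
    parent = COURT_HIERARCHY.get(other)
    while parent is not None:
        if parent == court:
            return True
        parent = COURT_HIERARCHY.get(parent)
    return False

def may_modify(instant_court: str, precedent_court: str) -> bool:
    """A vertical precedent (from a superior court) binds strictly; a horizontal
    precedent (the court's own) may be modified; inferior-court precedents do not bind."""
    if is_superior(precedent_court, instant_court):
        return False          # vertically bound: follow, cannot modify
    return True               # own or inferior court: modification is available

print(may_modify("district", "supreme"))   # False
print(may_modify("circuit", "circuit"))    # True
```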

Reasoning with dimensions and magnitudes

Abstract

This paper shows how two models of precedential constraint can be broadened to include legal information represented through dimensions. I begin by describing a standard representation of legal cases based on Boolean factors alone, and then review two models of constraint developed within this standard setting. The first is the “result model”, which supports only a fortiori reasoning. The second is the “reason model”, which supports a richer notion of constraint, since it allows the reasons behind a court’s decisions to be taken into account. I then show how the initial representation can be modified to incorporate dimensional information and how the result and reason models can be adapted to this new dimensional setting. As it turns out, these two models of constraint, which are distinct in the standard setting, coincide once they are transposed to the dimensional setting, yielding exactly the same patterns of constraint. I therefore explore two ways of refining the reason model of constraint so that, even in the dimensional setting, it can still be separated from the result model.
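
As a rough illustration of the a fortiori test underlying the result model, first with Boolean factors and then with dimensions (the factor names, dimension orderings and values are invented for the example, not taken from the paper):

```python
# Factor-based setting: a case is a set of pro-plaintiff and pro-defendant factors.
def result_model_forces_plaintiff(new_case, precedent):
    """A precedent decided for the plaintiff forces the same outcome if the new case
    is at least as strong for the plaintiff: it contains all the precedent's plaintiff
    factors and no defendant factors beyond the precedent's."""
    return (precedent["pi_factors"] <= new_case["pi_factors"]
            and new_case["delta_factors"] <= precedent["delta_factors"])

precedent = {"pi_factors": {"trade_secret"}, "delta_factors": {"reverse_engineerable"}}
new_case  = {"pi_factors": {"trade_secret", "breach_of_confidence"}, "delta_factors": set()}
print(result_model_forces_plaintiff(new_case, precedent))  # True

# Dimensional setting: each dimension is ordered toward one side; the new case must be
# at least as favourable to the plaintiff on every dimension of the precedent.
def dimensional_forces_plaintiff(new_case, precedent, pro_plaintiff_higher):
    return all(
        new_case[d] >= precedent[d] if pro_plaintiff_higher[d] else new_case[d] <= precedent[d]
        for d in precedent
    )

pro_plaintiff_higher = {"security_measures": True, "disclosures_to_outsiders": False}
print(dimensional_forces_plaintiff(
    {"security_measures": 8, "disclosures_to_outsiders": 1},
    {"security_measures": 5, "disclosures_to_outsiders": 3},
    pro_plaintiff_higher,
))  # True
```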

Interactive virtue and vice in systems of arguments: a logocratic analysis

Abstract

The Logocratic Method, and the Logocratic theory that underwrites it, provide a philosophical explanation of three purposes or goals that arguers have for their arguments: to make arguments that are internally strong (the conclusion follows from the premises to a greater or lesser degree, with the greatest degree in valid deductive arguments), or that are dialectically strong (winning in some forum of argument competition, as for example in litigation contests between plaintiffs or prosecutors on the one hand, and defendants on the other), or that are rhetorically strong (effective at persuading a targeted audience). This article presents the basic terms and methods of Logocratic analysis and then uses a case study to illustrate the Logocratic explanation of arguments. Highlights of this explanation are: the use of a (non-moral) virtue (and vice) framework to explicate the three strengths and weaknesses of arguments that are of greatest interest to arguers in many contexts (including but not limited to legal argument); the Logocratic explication of the structure of abduction generally and of legal abduction specifically; the concept of a system of arguments; and the concept of the dynamic interactive virtue (and vice) of arguments, a property whereby a system of arguments as a whole (for example, the set of several arguments typically offered by a plaintiff or by a defendant) is as virtuous (or vicious) as the component arguments that comprise it. This last concept is especially important since, according to Logocratic theory (and as illustrated in detail in this paper), some arguments, such as abduction and analogical argument, are themselves composed of different logical forms (for example, abduction always plays a role within analogical argument, and either deduction or defeasible modus ponens always plays a role within legal abduction).

Appellate Court Modifications Extraction for Portuguese

Abstract

Appellate Court Modifications Extraction consists of identifying, given an Appellate Court decision, the modifications that the upper court proposes to the lower court judge’s decision. In this work, we propose a system to extract Appellate Court Modifications for Portuguese. Information extraction for legal texts has been previously addressed using different techniques and for several languages. Our proposal differs from previous work in two ways: (1) our corpus is composed of Brazilian Appellate Court decisions, in which we look for a set of modifications provided by the Court; and (2) to automatically extract the modifications, we use a traditional Machine Learning approach and a Deep Learning approach, both as alternative solutions and as a combined solution. We tackle the Appellate Court Modifications Extraction task, experimenting with a wide variety of methods. In order to train and evaluate the system, we built the KauaneJunior corpus, using public data disclosed by the Appellate State Court of Rio de Janeiro jurisprudence database. Our best method, a Bidirectional Long Short-Term Memory network combined with Conditional Random Fields, obtained an \(F_{\beta = 1}\) score of 94.79%.
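
As a simplified stand-in for the extraction model described, here is a BiLSTM tagger with a softmax output in place of the CRF layer, with arbitrary vocabulary and dimension sizes; it only illustrates BIO-style labelling of modification spans, not the paper's actual architecture or data:

```python
import torch
import torch.nn as nn

# Tags follow a BIO scheme in which "B-MOD"/"I-MOD" mark tokens inside a proposed
# modification. Vocabulary size and dimensions are arbitrary illustrative values.
class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=100, hidden_dim=128, num_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)    # O, B-MOD, I-MOD

    def forward(self, token_ids):                         # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))           # (batch, seq_len, 2*hidden)
        return self.out(h)                                 # per-token tag scores

model = BiLSTMTagger()
tokens = torch.randint(0, 5000, (1, 12))                  # one 12-token decision excerpt
tag_scores = model(tokens)
predicted_tags = tag_scores.argmax(dim=-1)                 # (1, 12) BIO tag indices
print(predicted_tags.shape)
```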

Using machine learning to predict decisions of the European Court of Human Rights

Abstract

When courts started publishing judgements, big data analysis (i.e. large-scale statistical analysis of case law and machine learning) within the legal domain became possible. Taking data from the European Court of Human Rights as an example, we investigate how natural language processing tools can be used to analyse the texts of court proceedings in order to automatically predict (future) judicial decisions. With an average accuracy of 75% in predicting violations of 9 articles of the European Convention on Human Rights, our (relatively simple) approach highlights the potential of machine learning approaches in the legal domain. We show, however, that predicting decisions for future cases based on cases from the past negatively impacts performance (average accuracy ranging from 58 to 68%). Furthermore, we demonstrate that we can achieve relatively high classification performance (average accuracy of 65%) when predicting outcomes based only on the surnames of the judges who try the case.
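
As an illustration of the kind of "relatively simple" text-classification pipeline the abstract alludes to (the documents, labels and feature choices below are placeholders, not the ECHR data or the authors' exact setup):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder case texts and outcomes; real experiments would use full case documents.
documents = [
    "the applicant complained of inhuman treatment in detention ...",
    "the applicant alleged that the length of proceedings was excessive ...",
    "the government argued that domestic remedies had not been exhausted ...",
    "the court found the conditions of detention acceptable ...",
]
labels = [1, 1, 0, 0]   # 1 = violation found, 0 = no violation

# N-gram features plus a linear classifier, evaluated by cross-validated accuracy.
pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LinearSVC(),
)
scores = cross_val_score(pipeline, documents, labels, cv=2)
print(scores.mean())    # with real case texts, this is where accuracy around 0.75 would appear
```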
