Sunday, 17 November 2019

Simulation and Architecture: Mapping Building Information Modeling

Abstract

In the 1990s, Building Information Modeling (BIM) software significantly altered architectural approaches to planning and building. Based on parametric methods, BIM technologies sought to simulate the construction process prior to a building’s realisation. These computer simulations challenged the existing practice of representing a building through plan, section and elevation, proposing that one computational model could create a more efficient way of building. The history of BIM explorations and applications, while hardly linear, can be traced back to developments in computing since the post-war period. This article maps some of these histories by examining how the computational model became an organisational infrastructure, collecting data about design and building parameters, and facilitating knowledge transfer across industries. Special attention will be given to the foundational role of Charles Eastman’s work on a Building Description System (BDS) in the 1970s, as well as Robert Aish’s contribution to RUCAPS, one of the earliest applications of Building Modeling for the design of parametric structures. I will further address research on interface technologies and computational curve modelling as well as the popularisation of Building Information systems through the office of Gehry Partners. By highlighting how technological developments and cultural shifts intertwined in the making of BIM, this contribution sheds light on the epistemic status of computer simulations in architecture, and the dynamics of the design and building processes in which they are used.
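To make "parametric methods" concrete, here is a minimal sketch in Python; the class and attribute names are illustrative inventions, not drawn from Eastman's BDS or from RUCAPS, whose data structures the abstract does not describe. The point is only that a building element is stored once, as a set of parameters, and that plan-like, elevation-like and quantity views are all derived from that single description.

    # Hypothetical sketch of a parametric building element; names are
    # illustrative, not taken from BDS or RUCAPS.
    from dataclasses import dataclass

    @dataclass
    class Wall:
        length: float     # metres
        height: float     # metres
        thickness: float  # metres

        def plan_footprint(self) -> float:
            # What a plan drawing would show: length x thickness.
            return self.length * self.thickness

        def elevation_face(self) -> float:
            # What an elevation drawing would show: length x height.
            return self.length * self.height

        def material_volume(self) -> float:
            # Quantity take-off for construction planning.
            return self.length * self.height * self.thickness

    wall = Wall(length=6.0, height=3.0, thickness=0.3)
    # Changing one parameter updates every derived view consistently:
    wall.height = 3.5
    print(wall.plan_footprint(), wall.elevation_face(), wall.material_volume())

Editing one parameter propagates to every derived representation, which is the efficiency claim the abstract attributes to working from a single computational model rather than from separate plans, sections and elevations.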

Computer Simulations Then and Now: An Introduction and Historical Reassessment

Implicit Changes of Model Uses in Astrophysics, Illustrated on the Paris-Durham Shock Model

Abstract

This paper explores the epistemic status of models and simulations between theory, on the one hand, and observations, on the other. In particular, I will argue that the interpretation of an essentially invariant astrophysical model structure can change substantially over time. I will illustrate this claim using, as an example, the first 20 years (1985–2004) of development of the Paris-Durham shock code—a numerical model of slow interstellar shock waves (i.e. a disturbance of the medium that travels faster than the local speed of sound). I will show that the model’s interpretation and, in particular, its underlying representational ideal—the modeler’s (often implicit) goal governing the development and the use of the model—changed notably during this period. Whereas the code was originally used in a purely exploratory fashion, it was later taken to represent and encompass the target phenomenon as completely as possible. It is noteworthy that during this transition the model’s change of epistemic status was never explicitly acknowledged or in any way articulated. However, the impetus for the change can, I claim, be found in the role that observational data came to play in the later publications.
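As a point of reference for the parenthetical definition above, the criterion can be stated in one line (this is the textbook shock condition, not a detail specific to the Paris-Durham code): a disturbance propagating at speed v through a medium with local sound speed c_s is a shock when the sonic Mach number M = v / c_s exceeds 1.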

Program FAKE: Monte Carlo Event Generators as Tools of Theory in Early High Energy Physics

Abstract

The term Monte Carlo method indicates any computer-aided procedure for numerical estimation that combines mathematical calculations with randomly generated numerical input values. Today it is an important tool in high energy physics, and physicists and philosophers often consider it a sort of virtual experiment. The Monte Carlo method was developed in the 1940s, in the context of U.S. nuclear weapons research, an event often regarded as the origin of both computer simulation and “artificial reality” (Galison 1997). The present paper interrogates this strong claim by focusing on the emergence of Monte Carlo event generators in particle physics in the early 1960s. This historical case study shows how, as Monte Carlo computation became part of the toolbox of particle physicists around 1960, it was neither usually referred to as a “computer simulation” nor regarded as a surrogate for experimentation. In revising the history of this method, this paper asks: in what context did particle physicists of the 1960s decide to create FAKE, the first high-energy-physics Monte Carlo event generator? What was their goal? And what epistemic role did FAKE play? In answering these questions, it is argued that Monte Carlo computations were not introduced into particle physics to simulate experiments, but rather that they played the role of theoretical tools. The Monte Carlo method was able to do this thanks to its random component, a property which provided a means of modeling a specific phenomenon: so-called “(particle) resonances”. Indeed, in doing so, event generators even came to assimilate the notions of particle and resonance to one another and to reshape them, taking up an epistemic function which had previously been confined to physical-mathematical formulae: that of a medium which could express aspects of particle theory.
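To make the abstract's two ingredients concrete (deterministic calculation combined with random input, used to model resonances), here is a hedged toy sketch in the spirit of an event generator; it is emphatically not the historical FAKE program, whose code is not reproduced in the paper, and the mass and width values are illustrative only. A resonance appears as a peak in the distribution of invariant masses, conventionally described by a Breit-Wigner (Cauchy) line shape; drawing random masses from that shape "fakes" a sample of events.

    # Toy Monte Carlo event generator; not the historical FAKE program.
    import random
    import math

    def breit_wigner_mass(m0: float, gamma: float) -> float:
        # Sample an invariant mass from a Breit-Wigner (Cauchy) line shape
        # by inverse-transform sampling: m = m0 + (gamma/2)*tan(pi*(u - 1/2)),
        # where gamma is the full width at half maximum.
        u = random.random()
        return m0 + 0.5 * gamma * math.tan(math.pi * (u - 0.5))

    # Generate a toy sample of "events": each event is one resonance mass in GeV.
    M0, GAMMA = 0.775, 0.149   # illustrative, rho-meson-like numbers
    events = [breit_wigner_mass(M0, GAMMA) for _ in range(10_000)]

    # A histogram of `events` would show the resonance as a peak near M0,
    # the kind of theoretical signature such a generator lets one explore.
    in_peak = sum(1 for m in events if abs(m - M0) < GAMMA)
    print(f"fraction of events within one width of the peak: {in_peak/len(events):.2f}")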

‘A Pretence of What is Not’? A Study of Simulation(s) from the ENIAC Perspective

Abstract

What is the significance of high-speed computation for the sciences? How far does it result in a practice of simulation which affects the sciences on a very basic level? To offer more historical context to these recurring questions, this paper revisits the roots of computer simulation in the development of the ENIAC computer and the Monte Carlo method.
With the aim of identifying more clearly what really changed (or not) in the history of science in the 1940s and 1950s due to the computer, I will emphasize the continuities with older practices and develop a two-fold argument. Firstly, one can find a diversity of practices around ENIAC which tends to be ignored if one focuses only on the ENIAC itself as the originator of Monte Carlo simulation. Following from this, I claim, secondly, that there was no simulation around ENIAC. Not only is the term ‘simulation’ not used within that context, but the analysis also shows how ‘simulation’ is an effect of three interrelated sets of practices around the machine: (1) the mathematics which the ENIAC users employed and developed, (2) the programs, and (3) the physicality of the machine. I conclude that, in the context discussed, the most important shifts in practice concern the rethinking of existing computational methods so as to adapt them to the high speed and programmability of the new machine. Simulation, then, is but one facet of this process of adaptation, singled out by posterity as its principal aspect.
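For readers who want the barest illustration of the kind of computational method at issue (the paper itself discusses practices, not code), a standard toy example of Monte Carlo estimation follows; nothing below is drawn from, or a reconstruction of, any ENIAC program.

    # Minimal Monte Carlo estimate: the integral of f over [0, 1] equals
    # E[f(U)] for U ~ Uniform(0, 1), so the sample mean approximates it,
    # converging at the usual 1/sqrt(n) Monte Carlo rate.
    import random

    def mc_integral(f, n: int = 100_000) -> float:
        return sum(f(random.random()) for _ in range(n)) / n

    # Example: the integral of 4/(1+x^2) over [0, 1] is pi.
    estimate = mc_integral(lambda x: 4.0 / (1.0 + x * x))
    print(f"Monte Carlo estimate of pi: {estimate:.4f}")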

Introduction

Review

Collective Review

Collective Review

“Cooperation” or “Contest of the Nations”

Abstract

Around 1900, the exploration of Antarctica was regarded as one of the last great challenges in the opening up of the world. Many nations took part, among them the German Empire. In the decade before the First World War, two German Antarctic expeditions accordingly took place: from 1901 to 1903 under the leadership of Erich von Drygalski, and in 1911/12 under the leadership of Wilhelm Filchner. Scholarship has so far often described the relationship between the ventures of the various nations with a focus on either competition or cooperation. This article shows that international cooperation and competition shaped German Antarctic exploration in a tension-filled simultaneity. From the outset, the actors used both modes of interaction as argumentative resources and pursued, deliberately and with situational flexibility, courses of action following both competitive and cooperative logics. They took care not to let either mode predominate, but to maintain a balance, so as to be able to exploit the advantages arising from both competition and cooperation. The precise criteria of the competition, in particular, were partly kept in suspense in order to keep all options open. In the retrospective evaluation of their expeditions by the German public, however, they did not succeed in gaining interpretive authority over the ventures' success or failure: in the public perception of the German Empire, neither pointing to successful cooperation nor to victory in the competition for scientific achievements could compensate for the defeat in the race to reach the southernmost possible latitudes.
