Search results

11 records were found.

A detailed and accurate description of molecular vibrations and chemical reactions in physical chemistry often requires a full quantum mechanical treatment of the system of interest. This usually means that the time-dependent or time-independent Schrödinger equation for the nuclear degrees of freedom (DOF) has to be solved explicitly. For small systems (up to six internal DOF) this can be done with standard methods, i.e., by directly sampling the quantum mechanical wavefunction on a (product) grid and solving the Schrödinger equation at these grid points. Numerically, within the standard method the multi-dimensional quantum mechanical wavefunction is stored as an f-way tensor, where f is the number of DOF. Due to the linearity of the Schrödinger equation, the resulting numerical tasks then usually reduce to standar...
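The f-way tensor representation described above can be sketched in a few lines of Python. This is a hypothetical illustration only (the grid sizes and the separable Gaussian trial function are assumptions, not from the abstract); it shows why the storage cost grows exponentially with the number of DOF.

```python
import numpy as np

# Hypothetical example: sample a 3-DOF wavefunction on a product grid.
f = 3            # number of degrees of freedom (DOF) -- assumed for illustration
n = 16           # grid points per DOF -- assumed for illustration
grids = [np.linspace(-5.0, 5.0, n) for _ in range(f)]

# A separable Gaussian trial wavefunction psi(q1, q2, q3), built as an
# f-way tensor via successive outer products of 1-D factors.
one_dim = [np.exp(-0.5 * q**2) for q in grids]
psi = one_dim[0]
for phi in one_dim[1:]:
    psi = np.multiply.outer(psi, phi)

print(psi.shape)   # one tensor axis per DOF
print(psi.size)    # n**f amplitudes: storage grows exponentially with f
```

The `n**f` scaling of the amplitude count is exactly why the standard grid-based method is limited to small systems.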
"Grid computing", a term introduced in the mid-1990s, denotes an architecture for distributed systems that builds on the World Wide Web and extends the Web vision. Grid computing integrates the resources of a community, a so-called "virtual organization" (see below). The hope is that this makes compute- and/or data-intensive tasks tractable that no single organization could handle on its own. A "grid" denotes a computing, network, and software infrastructure, built according to the grid-computing approach, for sharing resources with the goal of accomplishing the tasks of a virtual organization. Initially, the possibility of using idle CPU resources elsewhere for one's own tasks was the main driving force behind the first experiments. ...
The Workshop on Automatic Performance Analysis (WAPA 2005, Dagstuhl Seminar 05501), held December 13-16, 2005, brought together performance researchers, developers, and practitioners with the goal of better understanding the methods, techniques, and tools that are needed for the automation of performance analysis for high performance computing.
From 12.12.05 to 16.12.05, the Dagstuhl Seminar 05501 "Automatic Performance Analysis" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
With more and more machines achieving petascale capabilities, the focus is shifting towards the next big barrier, exascale computing, and its possibilities and challenges. There is common agreement that using machines on this level will definitely require co-design of systems and applications, and corresponding actions on different levels of software, hardware, and infrastructure. Defining the community's vision of exascale computing as providing performance capabilities at extreme scales, and identifying the role and mission of the computer science experts involved, laid the basis for further discussions. By reflecting on the current state of petascale machines and technologies and identifying known bottlenecks and pitfalls looming ahead, this workshop derived the concrete barriers on the road towards...
The Dagstuhl Perspectives Workshop 12212 on "Co-Design of Systems and Applications for Exascale" reaches into the future, where exascale systems with their capabilities provide new possibilities and challenges. The goal of the workshop was to identify concrete barriers and obstacles, and to discuss ideas on how to overcome them. There was common agreement that co-design across all layers, i.e., algorithms, applications, programming models, run-time systems, architectures, and infrastructures, will be required. The discussion between the experts identified a series of requirements on exascale co-design efforts, as well as concrete recommendations and open questions for future research.
Background: B cell malignancies are characterized by clonal expansion of B cells expressing tumor-specific idiotypes on their surface. These idiotypes are ideal target antigens for an individualized immunotherapy. However, previous idiotype vaccines mostly lacked efficiency due to the low immunogenicity of the idiotype. The objective of the present study was the determination of the feasibility, safety, and immunogenicity of a novel chemically linked phage idiotype vaccine. Methods: In the murine B cell lymphoma 1 model, tumor idiotypes were chemically linked to phage particles used as immunological carriers. For comparison, the idiotype was genetically expressed on the major phage coat protein g8 or linked to keyhole limpet hemocyanin. After intradermal immunizations with idiotype vaccines, tolerability and humoral immune responses were...
Virtual high-throughput screening (vHTS) is an invaluable method in modern drug discovery. It permits screening large datasets or databases of chemical structures for structures that may bind to a drug target. Virtual screening is typically performed by docking code, which often runs sequentially. Processing of huge vHTS datasets can be parallelized by chunking the data, because individual docking runs are independent of each other. The goal of this work is to find an optimal splitting that maximizes the speedup while considering overhead and available cores on Distributed Computing Infrastructures (DCIs). We have conducted thorough performance studies accounting not only for the runtime of the docking itself, but also for structure preparation. Performance studies were conducted via the workflow-enabled science gateway MoSGrid (Mo...
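The trade-off behind the optimal splitting can be sketched with a simple cost model. This is an illustrative model under assumed parameters (a fixed per-chunk submission overhead, uniform task times, chunks scheduled in waves over the available cores); the function names and numbers are hypothetical and not the MoSGrid implementation.

```python
def parallel_runtime(n_tasks, task_time, n_chunks, n_cores, chunk_overhead):
    """Estimated wall time when n_tasks independent docking runs are split
    into n_chunks, each paying a fixed submission overhead (model assumption)."""
    tasks_per_chunk = -(-n_tasks // n_chunks)      # ceiling division
    chunk_time = tasks_per_chunk * task_time + chunk_overhead
    rounds = -(-n_chunks // n_cores)               # chunks run in waves over the cores
    return rounds * chunk_time

def best_chunk_count(n_tasks, task_time, n_cores, chunk_overhead):
    """Pick the chunk count minimizing the estimated wall time (brute force)."""
    return min(range(1, n_tasks + 1),
               key=lambda k: parallel_runtime(n_tasks, task_time,
                                              k, n_cores, chunk_overhead))

# Hypothetical scenario: 10,000 docking runs of 30 s each,
# 256 cores, 60 s per-chunk overhead.
k = best_chunk_count(10_000, 30.0, 256, 60.0)
print(k, parallel_runtime(10_000, 30.0, k, 256, 60.0))
```

The model captures the two opposing forces from the text: too few chunks leave cores idle, while too many chunks multiply the per-chunk overhead.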
Background: Multiple myeloma is characterized by clonal expansion of B cells producing monoclonal immunoglobulins or fragments thereof, which can be detected in the serum and/or urine and are ideal target antigens for patient-specific immunotherapies. Methods: Using phage particles as immunological carriers, we employed a novel chemically linked idiotype vaccine in a clinical phase I/II trial including 15 patients with advanced multiple myeloma. Vaccines composed of purified paraproteins linked to phage were manufactured successfully for each patient. Patients received six intradermal immunizations with phage idiotype vaccines in three different dose groups. Results: Phage idiotype was well tolerated by all study participants. A subset of patients (80% in the middle dose group) displayed a clinical response indicated by decrease or sta...
Hyperbolic conservation laws are important mathematical models for describing many phenomena in physics or engineering. The Finite Volume (FV) method and the Discontinuous Galerkin (DG) method are two popular methods for solving conservation laws on computers. Those two methods are good candidates for parallel computing:
• they require a large amount of uniform and simple computations,
• they rely on explicit time-integration,
• they present regular and local data access patterns.
In this paper, we present several FV and DG numerical simulations that we have realized with the OpenCL and MPI paradigms. First, we compare two optimized implementations of the FV method on a regular grid: an OpenCL implementation and a more traditional OpenMP implementation. We compare the efficiency of the approach on several CPU and GPU architectures of ...
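The parallel-friendly structure of the FV method can be seen in a minimal sketch. The example below is an assumed illustration (a first-order upwind scheme for 1D linear advection with periodic boundaries), not the OpenCL/MPI implementation from the paper; it shows the explicit, per-cell update with purely local data access.

```python
import numpy as np

# Minimal 1D finite-volume sketch for linear advection u_t + a u_x = 0
# with periodic boundaries (illustrative assumption, not the paper's code).
a = 1.0                    # advection speed
n = 100                    # number of cells
dx = 1.0 / n               # cell width
dt = 0.5 * dx / a          # explicit time step at CFL number 0.5
x = np.arange(n) * dx
u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)   # square-pulse initial data

for _ in range(100):
    flux = a * np.roll(u, 1)              # upwind flux at each cell's left face
    u = u + dt / dx * (flux - a * u)      # conservative update, independent per cell
```

Each cell update touches only its own value and its left neighbor, which is the "regular and local data access pattern" that makes FV schemes map well onto GPUs via OpenCL and onto distributed memory via MPI halo exchanges.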