Latest items added to the collection: Harvard University

Harvard University is a private Ivy League university located in Cambridge, Massachusetts, United States, whose history, influence, and wealth make it one of the most prestigious universities in the world.

Page 3 of results: 27,340 digital items found in 0.006 seconds

‣ Private vs. Political Choice of Securities Regulation: A Political Cost/Benefit Analysis

Coates, John
Source: Virginia Journal of International Law Association Publisher: Virginia Journal of International Law Association
Type: Journal Article
Portuguese

‣ Review of "Trusting What You're Told: How Children Learn from Others"

Warneken, Felix
Source: The University of Chicago Press Publisher: The University of Chicago Press
Type: Commentary or Review
Portuguese
Psychology

‣ An evaluation of the FDA's analysis of the costs and benefits of the graphic warning label regulation

Chaloupka, Frank J; Warner, Kenneth E; Acemoğlu, Daron; Gruber, Jonathan; Laux, Fritz; Max, Wendy; Newhouse, Joseph; Schelling, Thomas; Sindelar, Jody
Source: BMJ Publishing Group Publisher: BMJ Publishing Group
Type: Journal Article
Portuguese
The Family Smoking Prevention and Tobacco Control Act of 2009 gave the Food and Drug Administration (FDA) regulatory authority over cigarettes and smokeless tobacco products and authorised it to assert jurisdiction over other tobacco products. As with other Federal agencies, FDA is required to assess the costs and benefits of its significant regulatory actions. To date, FDA has issued economic impact analyses of one proposed and one final rule requiring graphic warning labels (GWLs) on cigarette packaging and, most recently, of a proposed rule that would assert FDA’s authority over tobacco products other than cigarettes and smokeless tobacco. Given the controversy over the FDA's approach to assessing net economic benefits in its proposed and final rules on GWLs and the importance of having economic impact analyses prepared in accordance with sound economic analysis, a group of prominent economists met in early 2014 to review that approach and, where indicated, to offer suggestions for an improved analysis. We concluded that the analysis of the impact of GWLs on smoking substantially underestimated the benefits and overestimated the costs, leading the FDA to substantially underestimate the net benefits of the GWLs. We hope that the FDA will find our evaluation useful in subsequent analyses...

‣ AvaDrone: An Autonomous Drone for Avalanche Victim Recovery

Dickensheets, Benjamin D.
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
For the 179 Americans who are caught in avalanches each year, timely recovery often means the difference between life and death. The goal of this project was to design and build a prototype drone for a system to quickly and automatically locate a buried victim, using an on-board antenna to receive a signal from industry standard transmitting beacons. The design was based on a quad-rotor platform and uses Arduino hardware to receive a beacon signal and navigate the craft. In broad strokes, this project is an effort to apply the new and exciting technology of hobby drones to the well-established application of avalanche victim recovery. Current avalanche beacon technologies suffer from challenges associated with user operation. Slow or untrained human searchers are poorly equipped to handle the challenges of a fast-paced search. The vision of an entirely autonomous solution to this problem has guided the project from its inception. This idea has been little explored despite a proliferation of drone technology in recent years. On one hand, all of the pieces of the project already exist in one form or another. Avalanche beacon technologies continue to mature, as do hobby drones and their application. This project builds on precisely these preexisting pieces...

‣ WeighTrack 2.0

Kugler, Tyler Reed
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
In the growing field of the Internet of Things, the interactions with our surrounding environments are becoming smarter and more reliant on data collection and analytics. WeighTrack 2.0 adds intelligence to liquid inventories, allowing users to precisely monitor the amount of liquid content in a bottle at any given time, without directly interacting with the fluid. Through the implementation of RFID technology, load-cell networks, and a Wi-Fi-enabled microcontroller, WeighTrack 2.0 provides a platform for dynamically tracking liquid consumption in labs, restaurants, and households, with Internet capabilities open to web developers.
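The core arithmetic behind such a load-cell system is straightforward: subtract the bottle's tare weight from the scale reading and divide by the liquid's density. A minimal sketch of that conversion (the function name and density table are illustrative assumptions, not taken from the thesis):

```python
# Sketch: convert a load-cell reading to remaining liquid volume.
# All names and density values are illustrative, not from the thesis.

DENSITY_G_PER_ML = {"water": 1.000, "olive oil": 0.915, "whiskey": 0.949}

def remaining_volume_ml(gross_weight_g, tare_weight_g, liquid="water"):
    """Estimate liquid volume from a scale reading, given the empty-bottle weight."""
    net_g = max(gross_weight_g - tare_weight_g, 0.0)  # clamp sensor noise below tare
    return net_g / DENSITY_G_PER_ML[liquid]

# A water bottle weighing 1250 g with a 500 g empty weight holds 750 mL:
print(remaining_volume_ml(1250, 500, "water"))  # 750.0
```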

‣ Engineering Ingenium: Improving Engagement and Accuracy With the Visualization of Latin for Language Learning

Zhou, Sharon
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
The goal of Ingenium is to prompt beginning Latin students to think consciously and critically about Latin grammar prior to translating a sentence, while engaging them with the grammar in an intuitive and hands-on way. Learners commonly make errors in reading Latin, because they do not fully understand the impact of Latin’s grammatical structure—its morphology and syntax—on a sentence’s meaning. Synthesizing instructional methods used for Latin and artificial programming languages, Ingenium visualizes the logical structure of grammar by making each word into a puzzle block, whose shape and color reflect the word’s morphological forms and roles. Ingenium is designed so that students do not focus on words in isolation, but make logical connections between words and group words together, so that the number of elements involved in the translation, or the cognitive load, is instantly reduced. For this reason, puzzle blocks only fit together if there is sound grammatical logic, preventing students from making syntactic errors and allowing them to experiment in a mistake-free environment. The blocks also serve to abstract out the grammatical terminology in favor of visual representation, making it easy for Ingenium to supplement current methods of Latin instruction and to maximize its adoption potential. The audience of Ingenium is novice Latin students. When students’ experience and confidence are at their lowest...
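The core mechanic described above, blocks that only snap together under sound grammatical logic, can be sketched as a simple agreement check. The data model below is a hypothetical simplification of Latin adjective-noun agreement, not Ingenium's actual implementation:

```python
# Sketch of the puzzle-block idea: an adjective block "fits" a noun block
# only when case, number, and gender agree. Hypothetical simplification,
# not Ingenium's data model.

from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    word: str
    case: str    # e.g. "nom", "acc"
    number: str  # "sg" or "pl"
    gender: str  # "m", "f", "n"

def fits(adjective: Block, noun: Block) -> bool:
    """An adjective-noun connection snaps together only under full agreement."""
    return ((adjective.case, adjective.number, adjective.gender)
            == (noun.case, noun.number, noun.gender))

puella = Block("puella", "nom", "sg", "f")   # "girl", nominative singular
bona   = Block("bona",   "nom", "sg", "f")   # "good", matching forms
bonum  = Block("bonum",  "acc", "sg", "n")   # mismatched case and gender

print(fits(bona, puella))   # True  - the blocks snap together
print(fits(bonum, puella))  # False - a syntactic error is physically impossible
```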

‣ Helping Hand or Queen Bee? The Impact of Senior-Level Women on Junior-Level Women Within Organizations

Wiegand, Tessa
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
This paper uses the relationship between female partners and associates in the 200 largest United States law firms to explore the impact of senior-level women on junior-level women. I look within firms to see how the percentage of female associates changes based upon the percentage of female partners and how other mechanisms affect the causality of that relationship. I find that a 10-percentage point increase in female partners leads to a 4.7-percentage point increase in female associates, but approximately half of the effect is due to fixed factors within firms and years. This effect is asymmetric; increases in female partners have much larger effects on female associates than do decreases. Female partners also have a greater effect in firms with fewer female partners. Next, I use time lags and find that while female partners have a significant impact on female associates’ retention, the decision to join the firm is influenced by other female associates. Furthermore, I find that the female partners present when current associates were summer associates have a negative impact on the full-time hiring of female associates and their decision to join the firm full-time. Finally, I find that the positive impact of female partners was substantially mitigated during the Global Financial Crisis

‣ The effect of quasi-identifier characteristics on statistical bias introduced by k-anonymization

Angiuli, Olivia Marie
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
The de-identification of publicly released datasets that contain personal information is necessary to preserve personal privacy. One such de-identification algorithm, k-anonymization, reduces the risk of the re-identification of such datasets by requiring that each combination of information-revealing traits be represented by at least k different records in the dataset. However, this requirement may skew the resulting dataset by preferentially deleting records that contain more rare information-revealing traits. This paper investigates the amount of bias and loss of utility introduced into an online education dataset by the k-anonymization process, as well as suggesting future directions that may decrease the amount of bias introduced during de-identification procedures.
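The mechanism of k-anonymization by suppression, and the bias it introduces, can be sketched in a few lines. This toy version only suppresses whole records (real k-anonymizers also generalize values); the records and quasi-identifiers are illustrative:

```python
# Sketch of k-anonymization by suppression: any combination of
# quasi-identifier values shared by fewer than k records is dropped.
# This illustrates the bias the thesis studies: records with rare
# trait combinations are deleted preferentially.

from collections import Counter

def k_anonymize(records, quasi_identifiers, k=2):
    key = lambda r: tuple(r[q] for q in quasi_identifiers)
    counts = Counter(key(r) for r in records)
    return [r for r in records if counts[key(r)] >= k]

records = [
    {"age": "20-29", "country": "US", "grade": 91},
    {"age": "20-29", "country": "US", "grade": 85},
    {"age": "20-29", "country": "US", "grade": 78},
    {"age": "60-69", "country": "IS", "grade": 99},  # rare combination
]

safe = k_anonymize(records, ["age", "country"], k=2)
print(len(safe))  # 3 -- the lone record with rare traits was suppressed,
                  # skewing any statistic computed over the remainder.
```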

‣ Modelling Mechanical Interactions Between Cancerous Mammary Acini

Wang, Jeffrey Bond
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
The rules and mechanical forces governing cell motility and interactions with the extracellular matrix of a tissue are often critical for understanding the mechanisms by which breast cancer is able to spread through the breast tissue and eventually metastasize. Ex vivo experimentation has demonstrated the formation of long collagen fibers through collagen gels between the cancerous mammary acini responsible for milk production, providing a fiber scaffolding along which cancer cells can disorganize. We present a minimal mechanical model that serves as a potential explanation for the formation of these collagen fibers and the resultant motion. Our working hypothesis is that cancerous cells induce this fiber formation by pulling on the gel and taking advantage of the specific mechanical properties of collagen. To model this system, we present a hybrid method where we employ a new Eulerian, fixed grid simulation known as the Reference Map Method to model the collagen as a nonlinear viscoelastic material coupled with a multi-agent model to describe individual cancer cells. We find that these phenomena can be explained by two simple ideas: cells pull collagen radially inwards and move towards the tension gradient of the collagen gel, while being exposed to standard adhesive and collision forces. From a computational perspective...

‣ Building N Birds With 1 Store: Parallel Simulations of Stochastic Evolutionary Processes

Janitsch, William
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
Stochastic processes are used to study the dynamics of evolution in finite, structured populations. Simulations of such processes provide a useful tool for their study, but are currently limited by computational speed and memory bottlenecks, even when naively parallelized. This thesis proposes two novel parallelization methods for simulating a particular class of evolutionary processes known as "games on graphs." The theoretical speed-up and scalability of these methods is analyzed across various parameters. A novel approximate parallel method is also proposed, which allows for further speed-up at the expense of some accuracy. Discussion of implementation considerations follows, and a resulting implementation in Python is used to provide empirical performance results which match closely with theoretical ones. Applications are suggested for a variety of open problems in biology, behavioral economics, political science, and linguistics.
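One of the simplest processes in the "games on graphs" class is a Moran process on a cycle, run serially until one type fixates; simulations of this kind are what the thesis parallelizes. The update rule below is a standard textbook death-birth rule, not necessarily the thesis's exact model:

```python
# Sketch of one "game on a graph": a neutral death-birth Moran process on a
# cycle of n nodes, run until fixation (all one type). A uniformly random
# node dies and is replaced by a copy of a random neighbor.

import random

def moran_death_birth(n=20, mutants=1, seed=42):
    random.seed(seed)
    pop = [1] * mutants + [0] * (n - mutants)   # 1 = mutant, 0 = resident
    neighbors = lambda i: [(i - 1) % n, (i + 1) % n]
    while 0 < sum(pop) < n:                     # loop until one type fixates
        dead = random.randrange(n)              # uniform death...
        parent = random.choice(neighbors(dead)) # ...replaced by a neighbor's copy
        pop[dead] = pop[parent]
    return pop

final = moran_death_birth()
print("mutant fixed" if final[0] == 1 else "mutant went extinct")
```

Because each run is an independent chain like this, many of them can be farmed out in parallel; the thesis's contribution is doing so without the memory bottlenecks of the naive approach.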

‣ Expansion in Lifts of Graphs

Makelov, Aleksandar A.
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
The central goal of this thesis is to better understand, and explicitly construct, expanding towers G_1,G_2,..., which are expander families with the additional constraint that G_{n+1} is a lift of G_n. A lift G of H is a graph that locally looks like H, but globally may be different; lifts have been proposed as a more structured setting for elementary explicit constructions of expanders, and there have recently been promising results in this direction by Marcus, Spielman and Srivastava [MSS13], Bilu and Linial [BL06], and Rozenman, Shalev and Wigderson [RSW06]; besides that, expansion in lifts is related to the Unique Games Conjecture (e.g., Arora et al [AKK+08]). We develop the basic theory of spectral expanders and lifts in the generality of directed multigraphs, and give some examples of their applications. We then derive some group-theoretic structural properties of towers, and show that a large class of commonly used graph operations "respect" lifts. These two insights allow us to give a different perspective on an existing construction [RSW06], show that standard iterative constructions of expanders can be adjusted to give expander towers almost "for free", and give a new elementary construction, along the lines of Ben-Aroya and Ta-Shma [BATS11]...
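The signed 2-lift construction at the heart of this line of work is easy to state in code: each base vertex splits into two copies, and each edge's sign decides whether the copies are connected in parallel or crossed. A sketch (the triangle example is ours, not from the thesis):

```python
# Sketch of a 2-lift: each vertex v of the base graph splits into (v, 0) and
# (v, 1); an edge signed +1 connects matching copies, -1 connects crossed
# copies. This is the signing construction used in the Bilu-Linial and
# Marcus-Spielman-Srivastava line of work.

def two_lift(edges, signs):
    """edges: list of (u, v); signs: parallel list of +1/-1."""
    lifted = []
    for (u, v), s in zip(edges, signs):
        if s == +1:    # parallel edges: (u,i) -- (v,i)
            lifted += [((u, 0), (v, 0)), ((u, 1), (v, 1))]
        else:          # crossed edges: (u,i) -- (v,1-i)
            lifted += [((u, 0), (v, 1)), ((u, 1), (v, 0))]
    return lifted

# A triangle with exactly one crossed edge lifts to a 6-cycle:
triangle = [(0, 1), (1, 2), (2, 0)]
lift = two_lift(triangle, [+1, +1, -1])
print(len(lift))  # 6 edges on 6 vertices, every vertex still degree 2
```

Degrees are preserved (the lift "locally looks like" the base graph); the choice of signs is what controls the spectrum, and hence the expansion, of the lifted graph.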

‣ The Differential Privacy of Bayesian Inference

Zheng, Shijie
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
Differential privacy is one recent framework for analyzing and quantifying the amount of privacy lost when data is released. Meanwhile, multiple imputation is an existing Bayesian-inference based technique from statistics that learns a model using real data, then releases synthetic data by drawing from that model. Because multiple imputation does not directly release any real data, it is generally believed to protect privacy. In this thesis, we examine that claim. While there exist newer synthetic data algorithms specifically designed to provide differential privacy, we evaluate whether multiple imputation already includes differential privacy for free. Thus, we focus on several method variants for releasing the learned model and releasing the synthetic data, and how these methods perform for models taking on two common distributions: the Bernoulli and the Gaussian with known variance. We prove a number of new or improved bounds on the amount of privacy afforded by multiple imputation for these distributions. We find that while differential privacy is ostensibly achievable for most of our method variants, the conditions needed for it to do so are often not realistic for practical usage. At least in theory, this is particularly true if we want absolute privacy (ε-differential privacy)...
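For contrast with the "privacy for free" question, the canonical way to purchase ε-differential privacy explicitly is the Laplace mechanism. Releasing the mean of n binary records has sensitivity 1/n (changing one record moves the mean by at most 1/n), so Laplace noise of scale 1/(nε) suffices. This baseline sketch is standard textbook material, not one of the thesis's multiple-imputation variants:

```python
# Laplace mechanism for the mean of n Bernoulli observations:
# sensitivity is 1/n, so noise of scale (1/n)/epsilon gives epsilon-DP.
# Standard baseline, not the multiple-imputation methods of the thesis.

import random

def laplace_mean(data, epsilon, rng=random.Random(0)):
    n = len(data)
    true_mean = sum(data) / n
    scale = (1.0 / n) / epsilon              # sensitivity / epsilon
    # Sample Laplace(scale) as the difference of two exponentials.
    noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
    return true_mean + noise

data = [1, 0, 1, 1, 0, 1, 0, 1] * 125        # n = 1000 binary records
print(abs(laplace_mean(data, epsilon=1.0) - 0.625) < 0.05)  # True: noise is tiny at this n
```

Note the trade-off the thesis probes: here privacy has an explicit, tunable cost in accuracy, whereas multiple imputation's privacy, if any, comes from the model rather than calibrated noise.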

‣ Security Analysis of Java Web Applications Using String Constraint Analysis

Li, Louis
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
Web applications are exposed to myriad security vulnerabilities related to malicious user string input. In order to detect such vulnerabilities in Java web applications, this project employs string constraint analysis, which approximates the values that a string variable in a program can take on. In string constraint analysis, program analysis generates string constraints -- assertions about the relationships between string variables. We design and implement a dataflow analysis for Java programs that generates string constraints and passes those constraints to the CVC4 SMT solver to find a satisfying assignment of string variables. Using example programs, we illustrate the feasibility of the system in detecting certain types of web application vulnerabilities, such as SQL injection and cross-site scripting.
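The flavor of the dataflow step can be shown with a toy taint analysis: propagate "may contain untrusted input" through assignments and concatenations, and flag tainted values reaching a query sink. The real system instead emits string constraints and hands them to the CVC4 solver; the statement format here is invented for illustration:

```python
# Toy stand-in for the string-constraint dataflow: taint tracking over a
# tiny three-address program. Statement tuples are an invented format.

def analyze(statements):
    tainted = set()
    for stmt in statements:
        kind = stmt[0]
        if kind == "input":                  # x = request parameter
            tainted.add(stmt[1])
        elif kind == "concat":               # x = y + z
            _, dst, *srcs = stmt
            if any(s in tainted for s in srcs):
                tainted.add(dst)
        elif kind == "sanitize":             # x = escape(x)
            tainted.discard(stmt[1])
        elif kind == "sink":                 # executeQuery(x)
            if stmt[1] in tainted:
                return f"possible SQL injection via {stmt[1]!r}"
    return "no injection found"

program = [
    ("input", "name"),
    ("concat", "query", "prefix", "name"),   # query = "SELECT ..." + name
    ("sink", "query"),
]
print(analyze(program))  # possible SQL injection via 'query'
```

String constraint analysis is strictly more precise than this: instead of a binary tainted bit, the solver reasons about which concrete string values can actually reach the sink.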

‣ Strong Update for Object-Oriented Flow-Sensitive Points-To Analysis

Chao, Ling-Ya Monica
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
Points-to analysis is a static analysis that approximates which memory locations each program expression may refer to. Many client analyses use points-to information to optimize compilers or reason about program security. The effectiveness of the client analyses relies on the precision of the points-to analysis. Flow-sensitive points-to analyses compute points-to information per program point, providing additional precision over flow-insensitive points-to analyses. We present a points-to analysis for object-oriented programs that is specifically designed to enable strong update, which is particularly useful in object-oriented languages as it can enable precise reasoning about object invariants established during object construction. We enable strong update by using the recency abstraction: each allocation site is represented by two abstract objects, the most-recently-allocated object and any non-most-recently allocated objects. By definition, the fields of a most-recently-allocated abstract object correspond to a single concrete memory location and can thus be strongly updated. Our analysis is implemented for Java bytecode. It is scalable (130k lines of code can be analyzed in 92 seconds), and significantly improves the precision of some client analyses...
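The recency abstraction can be sketched as a points-to state that keeps, per allocation site, a most-recently-allocated (MRO) object plus a summary object. Field writes to the MRO can be strongly updated (old values discarded); summary writes only accumulate. The state representation below is a hypothetical simplification of the implemented analysis:

```python
# Sketch of the recency abstraction: state maps each allocation site to a
# pair (MRO field values, summary field values). Hypothetical model, far
# simpler than the real points-to analysis.

def allocate(state, site):
    mro, summary = state.setdefault(site, (set(), set()))
    state[site] = (set(), summary | mro)        # old MRO facts merge into summary

def write_field(state, site, value, most_recent=True):
    mro, summary = state[site]
    if most_recent:
        state[site] = ({value}, summary)        # strong update: replaces contents
    else:
        state[site] = (mro, summary | {value})  # weak update: accumulates

state = {}
allocate(state, "new A()")
write_field(state, "new A()", "x")
write_field(state, "new A()", "y")              # strongly overwrites "x"
print(state["new A()"][0])  # {'y'} -- the MRO field holds exactly one value
```

The payoff is exactly the object-construction scenario in the abstract: while an object is most-recently-allocated, the analysis knows its fields precisely, so invariants set in the constructor are not smeared together with older objects.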

‣ Set Reconciliation and File Synchronization Using Invertible Bloom Lookup Tables

Gentili, Marco
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
As more and more data migrate to the cloud, and the same files become accessible from multiple different machines, finding effective ways to ensure data consistency is becoming increasingly important. In this thesis, we cover current methods for efficiently reconciling sets of objects without the use of logs or other prior context, better known as the set reconciliation problem. We also discuss the state of the art for file synchronization, including methods that use set reconciliation techniques as an intermediate step. We explain the design and implementation of a novel file synchronization protocol tailored to minimize transmission complexity and targeted for files with relatively few changes. We also propose an extension of our file synchronization protocol for more general file directory synchronization. We describe IBLTsync, our implementation of the aforementioned file synchronization protocol, and benchmark it against a naïve file transmission protocol and rsync, a popular file synchronization library. We find that for files with relatively few changes, IBLTsync transmits significantly less data than the naïve protocol, and moderately less data than rsync. In addition, we provide the first (to our knowledge) implementation of multi-party set reconciliation using Invertible Bloom Lookup Tables...
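The core data structure can be sketched compactly: an IBLT stores, per cell, a count, an XOR of keys, and an XOR of key checksums. Each of two peers inserts its keys, one table is subtracted from the other (cancelling shared keys), and "pure" cells are peeled off to recover the symmetric difference. Cell count and hashing below are illustrative choices, not IBLTsync's parameters:

```python
# Minimal Invertible Bloom Lookup Table for two-party set difference.
# Hashing scheme and sizes are illustrative assumptions.

import hashlib

def _cells(key, m, k=3):
    """k distinct cell indices for a key, derived from salted MD5."""
    idxs, salt = [], 0
    while len(idxs) < k:
        h = int(hashlib.md5(f"{salt}:{key}".encode()).hexdigest(), 16) % m
        if h not in idxs:
            idxs.append(h)
        salt += 1
    return idxs

def _check(key):
    return int(hashlib.md5(f"chk:{key}".encode()).hexdigest(), 16)

class IBLT:
    def __init__(self, m=64):
        self.m = m
        self.count = [0] * m
        self.key_sum = [0] * m
        self.chk_sum = [0] * m

    def insert(self, key):
        for i in _cells(key, self.m):
            self.count[i] += 1
            self.key_sum[i] ^= key
            self.chk_sum[i] ^= _check(key)

    def subtract(self, other):
        d = IBLT(self.m)
        d.count = [a - b for a, b in zip(self.count, other.count)]
        d.key_sum = [a ^ b for a, b in zip(self.key_sum, other.key_sum)]
        d.chk_sum = [a ^ b for a, b in zip(self.chk_sum, other.chk_sum)]
        return d

    def decode(self):
        """Peel pure cells; returns (keys only in A, keys only in B)."""
        only_a, only_b = set(), set()
        progress = True
        while progress:
            progress = False
            for i in range(self.m):
                # A pure cell holds exactly one key; the checksum confirms it.
                if self.count[i] in (1, -1) and self.chk_sum[i] == _check(self.key_sum[i]):
                    key, sign = self.key_sum[i], self.count[i]
                    (only_a if sign == 1 else only_b).add(key)
                    for j in _cells(key, self.m):   # remove key from all its cells
                        self.count[j] -= sign
                        self.key_sum[j] ^= key
                        self.chk_sum[j] ^= _check(key)
                    progress = True
        return only_a, only_b

a, b = IBLT(), IBLT()
for key in (1, 2, 3, 10):
    a.insert(key)
for key in (2, 3, 4):
    b.insert(key)
only_a, only_b = a.subtract(b).decode()
print(sorted(only_a), sorted(only_b))
```

The key property for synchronization is that the table size scales with the size of the *difference*, not the size of the sets, which is why the protocol wins on files with relatively few changes.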

‣ Memory Abstractions for Data Transactions

Herman, Nathaniel
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
This thesis presents STO, a software transactional memory (STM) based not on low-level reads and writes on memory, but on datatypes—arrays, lists, queues, hash tables, and so forth—that explicitly support transactional operations. Conventional STMs allow programmers to write concurrent code in much the same way as sequential code—thereby more easily taking advantage of multiple CPU cores. However, these conventional STMs track every memory word accessed during a transaction, so even simple operations can perform many more memory accesses for synchronization than is strictly required for transactional correctness. Our insight is that concurrent data structures can generate fewer superfluous accesses, and use more efficient concurrency protocols, when transaction bookkeeping tracks high-level operations like “insert node into tree.” We test our ideas on the STAMP benchmark suite for STM applications, on a high-performance in-memory database, and on a previously single-threaded program that we extend to multithreaded operation. We find that datatypes can support transactional operations without too much trouble; that relatively naive users can build simple transaction support into their own data structures; and that our typed STM can outperform and outscale conventional...
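The "track high-level operations" idea can be sketched with a transactional set that buffers inserts and validates a version number at commit time, instead of tracking every memory word read. This is a deliberately naive design, much coarser than STO's per-operation tracking:

```python
# Toy datatype-level STM: a transactional set that logs high-level
# operations and validates at commit. Hypothetical design sketch, not STO.

class TxSet:
    def __init__(self):
        self.items, self.version = set(), 0

class Transaction:
    def __init__(self, txset):
        self.s, self.read_version, self.pending = txset, None, []

    def contains(self, x):
        if self.read_version is None:
            self.read_version = self.s.version   # record what we observed
        return x in self.s.items

    def insert(self, x):
        self.pending.append(x)                   # buffer writes until commit

    def commit(self):
        if self.read_version is not None and self.s.version != self.read_version:
            return False                         # a conflicting commit happened; retry
        self.s.items.update(self.pending)
        self.s.version += 1
        return True

s = TxSet()
t1, t2 = Transaction(s), Transaction(s)
t1.contains("a"); t1.insert("a")
t2.insert("b"); t2.commit()                      # t2 commits first
print(t1.commit())  # False -- t1's read is stale, so it must retry
```

A real datatype-aware STM would validate at finer grain (e.g. only transactions whose *observed* elements changed conflict), which is exactly where the reduction in superfluous bookkeeping comes from.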

‣ Leveraging Human Brain Activity to Improve Object Classification

Fong, Ruth Catherine
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
Today, most object detection algorithms differ drastically from how humans tackle visual problems. In this thesis, I present a new paradigm for improving machine vision algorithms by designing them to better mimic how humans approach these tasks. Specifically, I demonstrate how human brain activity from functional magnetic resonance imaging (fMRI) can be leveraged to improve object classification. Inspired by the graduated manner in which humans learn, I present a novel algorithm that simulates learning in a similar fashion by more aggressively penalizing the misclassification of certain training examples. I propose a method to learn annotations that capture the difficulty of detecting an object in an image from auxiliary brain activity data. I then demonstrate how to leverage these annotations by using a modified definition of Support Vector Machines (SVMs) that uses these annotations to weight training data in an object classification task. An experimental comparison between my procedure and a parallel control shows that my techniques provide significant improvements in object classification. In particular, my protocol empirically halved the gap in classification accuracy between SVM classifiers that used state-of-the-art, yet computationally intensive convolutional neural net (CNN) features and those that used out-of-the-box...
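The weighting scheme can be sketched as a linear SVM trained by subgradient descent on a per-example weighted hinge loss, so that examples flagged as difficult (in the thesis, via fMRI-derived annotations; here, via made-up weights) pull the decision boundary harder. Toy 2-D data, not the thesis's experiments:

```python
# Sketch of an instance-weighted linear SVM: subgradient descent on a
# per-example weighted hinge loss. Weights are hypothetical difficulty
# scores standing in for fMRI-derived annotations.

def train_weighted_svm(xs, ys, weights, lr=0.1, lam=0.01, epochs=200):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y, c in zip(xs, ys, weights):
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:                     # hinge active: weighted push
                w[0] += lr * (c * y * x1 - lam * w[0])
                w[1] += lr * (c * y * x2 - lam * w[1])
                b += lr * c * y
            else:                              # only regularization shrinks w
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

xs = [(2, 2), (3, 1), (-2, -1), (-1, -3)]
ys = [1, 1, -1, -1]
weights = [2.0, 1.0, 1.0, 2.0]                 # assumed difficulty weights
w, b = train_weighted_svm(xs, ys, weights)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for x1, x2 in xs]
print(preds == ys)  # True on this separable toy set
```

In standard soft-margin terms this corresponds to giving each example its own misclassification cost C·cᵢ, which is how off-the-shelf SVM libraries typically expose per-sample weighting.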

‣ Show Me the Money: Examining the Validity of the Contract Year Phenomenon in the NBA

Ryan, Julian
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
The media narrative of the ‘contract year effect’ is espoused across all major professional American sports leagues, particularly the MLB and NBA. In line with basic incentive theory, this hypothesis has been shown to be true in baseball, but the analysis in basketball to this point has been flawed. In estimating the contract year effect in the NBA, this paper is the first to define rigorously the various states of contract incentives, the ignorance of which has been a source of bias in the literature thus far. It further expands on previous analyses by measuring individual performance more broadly across a range of advanced metrics. Lastly, it attempts to account for the intrinsic endogeneity of playing in a contract year, as better players get longer contracts and are thus less likely to be in a contract year, by using exogenous variations in the NBA’s contract structure to form an instrument, and by comparing performance to a priori expectations. In this manner, this paper produces the first rigorous finding of a positive contract year phenomenon. The estimated effect is about half that found in baseball, equivalent to a 3-5 percentile boost in performance for the median player in the NBA.

‣ The Strontium Isotope Record of Zavkhan Terrane Carbonates: Strontium Isotope Stability Through the Ediacaran-Cambrian Transition

Petach, Tanya N.
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
First order trends in the strontium isotopic (87Sr/86Sr) composition of seawater are controlled by radiogenic inputs from the continent and non-radiogenic inputs from exchange at mid-ocean ridges. Carbonates precipitated in seawater preserve trace amounts of strontium that record this isotope ratio and therefore record the relative importance of mid-ocean ridge and weathering chemical inputs to seawater composition. It has been proposed that environmental changes during the Ediacaran-Cambrian transition may have enabled the rapid diversification of life commonly named the “Cambrian explosion.” Proposed environmental changes include a 2.5x increase in mid-ocean ridge spreading at the Ediacaran-Cambrian boundary and large continental fluxes of sediment into the oceans. These hypotheses rely on a poorly resolved strontium isotope curve to interpret Ediacaran-Cambrian seawater chemistry. A refined strontium isotope curve through this time period may offer insight into the environmental conditions of the early Cambrian. New age models and detailed mapping in the Zavkhan terrane in west-central Mongolia provide the context necessary for robust geochemical analysis. This study aims to better resolve the coarse strontium isotope curve for the early Cambrian period by analyzing carbonate sequences in the Zavkhan basin. These carbonate sections are rapidly deposited...

‣ Turning Big Data Into Small Data: Hardware Aware Approximate Clustering With Randomized SVD and Coresets

Moon, Tarik Adnan
Source: Harvard University Publisher: Harvard University
Type: Thesis or Dissertation; text Format: application/pdf
Portuguese
Organizing data into groups with unsupervised learning algorithms such as k-means clustering and GMMs is one of the most widely used techniques in data exploration and data mining. Because these clustering algorithms are iterative by nature, finding clusters quickly becomes increasingly challenging for big datasets. The iterative nature of k-means makes it inherently difficult to optimize for modern hardware, especially as pushing data through the memory hierarchy is the main bottleneck in modern systems. Therefore, performing on-the-fly unsupervised learning is particularly challenging. In this thesis, we address this challenge by presenting an ensemble of algorithms that provide hardware-aware clustering, along with a road-map for hardware-aware machine learning algorithms. We move beyond simple yet aggressive parallelization, useful only for the embarrassingly parallel parts of the algorithms, by employing data reduction, re-factoring of the algorithm, and parallelization through the SIMD instructions of a general-purpose processor. We find that careful engineering using the processor's SIMD instructions, together with hand-tuning, reduces response time by about 4x. Further, by reducing both data dimensionality and the number of data points via PCA and coreset-based sampling, we obtain a very good representative sample of the dataset. Running clustering on the reduced dataset...
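The coreset step can be sketched as importance sampling: points far from the data mean are sampled with higher probability (a crude stand-in for k-means sensitivity scores), and each sampled point carries weight 1/(m·p) so weighted statistics stay unbiased. One-dimensional toy data below; the thesis pairs this reduction with PCA and SIMD-tuned clustering:

```python
# Sketch of coreset construction by importance sampling. The sensitivity
# proxy (distance from the mean) and all parameters are illustrative.

import random

def coreset(points, m, seed=0):
    rng = random.Random(seed)
    n = len(points)
    mean = sum(points) / n
    scores = [abs(p - mean) + 1e-9 for p in points]   # crude sensitivity proxy
    total = sum(scores)
    probs = [s / total for s in scores]
    picks = rng.choices(range(n), weights=probs, k=m)
    # Weight 1/(m * p_i) keeps weighted sums unbiased estimators.
    return [(points[i], 1.0 / (m * probs[i])) for i in picks]

data = [0.1, 0.2, 0.15, 10.0, 9.8, 0.05, 0.12, 10.2]
cs = coreset(data, m=4)
print(len(cs), all(w > 0 for _, w in cs))
```

Clustering then runs on the m weighted points instead of all n, which is where the "turning big data into small data" speedup comes from.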