Current Project:
Scalable Consensus
Protocols and System Architectures for Blockchain Services
Bitcoin’s blockchain technology is emerging as an
important approach for decentralized management of digital asset ownership. A
blockchain is a replicated ledger maintained in a decentralized manner, without
requiring a central authority. A distributed consensus protocol is needed to
ensure a globally agreed total order on the blocks in the chain. For this,
Bitcoin uses a technique called Proof-of-Work (PoW), which requires each node
wanting to append a new block to solve a hard cryptographic puzzle. Bitcoin’s
PoW-based mechanism provides only probabilistic guarantees for consensus, which
leads to several inherent difficulties in scaling its performance. The PoW-based
consensus technique also requires an inordinate amount of computing and
electrical power.
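To illustrate the PoW idea, here is a minimal sketch. It is not Bitcoin's actual scheme (which hashes a structured block header with double SHA-256 against a numeric target); it simply searches for a nonce whose hash has a required number of leading zero hex digits:

```python
import hashlib

def mine_block(prev_hash, data, difficulty=4):
    """Search for a nonce whose block hash has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Appending a block means finding a valid nonce; verifying one takes a single hash.
nonce, digest = mine_block("genesis", "tx batch 1", difficulty=3)
```

Each additional hex digit of difficulty multiplies the expected mining work by 16 while verification stays constant, which is exactly the asymmetry that makes PoW both secure and power-hungry.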
The goal of this project is to
investigate alternative approaches for building scalable blockchain services.
Our research aims to use classical consensus protocols with Byzantine
Fault Tolerance (BFT), sharding for parallel execution of validation tasks,
alternate data models in place of a linear chain, and use of other trust models
and mechanisms in place of the PoW model. Our work is focused on hybrid
environments with both the permissioned model of user participation and
Bitcoin’s permissionless open access model. These two models present different
kinds of design constraints and challenges. Some of the approaches being
investigated are based on sharding and multi-chain structures for
performance scaling and storage efficiency. In place of the PoW model, alternate
trust models such as Proof-of-Stake (PoS), Proof-of-Authority (PoA), and
Proof-of-Elapsed Time (PoET) are being considered in our work.
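As a rough illustration of the sharding idea (an assumed scheme for exposition, not necessarily this project's design), one common approach partitions accounts across shards by hashing their identifiers, so that transactions touching a single shard can be validated in parallel while cross-shard transactions require coordination:

```python
import hashlib

NUM_SHARDS = 4  # assumption: a small fixed shard count, for illustration only

def shard_of(account_id, num_shards=NUM_SHARDS):
    """Deterministically map an account to a shard by hashing its identifier."""
    digest = hashlib.sha256(account_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

def shards_touched(tx):
    """Shards a transfer touches; a single-shard tx can be validated locally."""
    return {shard_of(tx["sender"]), shard_of(tx["receiver"])}
```

Because the mapping is deterministic, every validator agrees on shard placement without any extra communication; the hard part, which this sketch omits, is committing cross-shard transactions atomically.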
Recently Completed Project:
Supported by NSF Award 1319333
Computing resources provided by Minnesota Supercomputing Institute
Scalable Transaction
Management and Geo-Replication in Cloud Data Storage Services
In this
project we are developing scalable techniques for transaction management in
Cloud data storage services based on key-value NoSQL models. Specifically,
our current focus is on Hadoop/HBase in a Cloud datacenter environment. Our
approach uses Snapshot Isolation (SI) based concurrency control. We
are investigating scalable techniques and system architectures for supporting
serializable SI-based transactions on NoSQL data management services. The broad
goal is to provide autonomically scalable transaction management techniques
within a datacenter. Another thrust of this research is to develop techniques
for transaction management for geographically replicated data across multiple
datacenters. Here we are exploring data management techniques supporting a
spectrum of consistency guarantees, ranging from eventual consistency and
causal consistency to serializability. In this context we have developed
the Causally Coordinated Snapshot Isolation model for geo-replicated data.
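The core SI commit rule can be sketched as follows. This toy validator (an illustration, not the project's implementation) gives each transaction a snapshot timestamp and applies first-committer-wins on write-write conflicts:

```python
class SIValidator:
    """Toy snapshot-isolation commit check with first-committer-wins."""

    def __init__(self):
        self.commit_ts = 0    # global commit timestamp counter
        self.last_write = {}  # key -> commit timestamp of its latest committed write

    def begin(self):
        """Start a transaction: its snapshot is the current commit timestamp."""
        return self.commit_ts

    def try_commit(self, snapshot_ts, write_set):
        """Commit unless some write-set key was committed after our snapshot."""
        if any(self.last_write.get(k, 0) > snapshot_ts for k in write_set):
            return False      # write-write conflict: the first committer already won
        self.commit_ts += 1
        for k in write_set:
            self.last_write[k] = self.commit_ts
        return True
```

Under this rule two concurrent transactions writing the same key cannot both commit; note that plain SI still permits write-skew anomalies, which is why the project targets *serializable* SI on top of this basic check.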
Beehive: A Parallel Programming Framework for Graph Problems
(Open Source
software released under
GNU GPL V3 license)
The Beehive project's focus is on the development of a parallel
programming framework which will provide a simple, efficient, and robust model
for large-scale graph data analytics applications on large-scale clusters and
cloud computing environments. In such applications, parallelism tends to be
fine-grain and amorphous, which makes it difficult to extract parallelism at a
coarse-grain level using commonly available techniques. To efficiently
harness fine-grain amorphous parallelism in graph problems, the Beehive
computing model is based on speculative parallel execution of tasks in a cluster
computing environment. This approach is supported by a key-value based
in-memory store, implemented on a cluster of computers, for storing and
manipulating graph data. The intermediate results of the parallel computations
are all stored in the shared storage and exposed to all the processes. In this model,
multiple tasks are scheduled to execute in parallel, and each task is executed
as a transaction, ensuring the atomicity and isolation properties of concurrent
task executions. This project is investigating techniques for supporting
transactional execution of parallel tasks and methods for ensuring
fault-tolerance in large-scale computing problems.
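The speculative, transactional task model can be sketched as follows (a simplified single-machine illustration with an in-process store, not the Beehive implementation): a task snapshots its read set, computes outside the lock, and commits only if none of its reads changed, retrying on conflict:

```python
import threading

class KVStore:
    """Shared in-memory key-value store with per-key versions."""
    def __init__(self, data):
        self.lock = threading.Lock()
        self.data = dict(data)
        self.version = {k: 0 for k in data}

def run_task(store, read_keys, update_fn):
    """Run a graph task speculatively: snapshot its reads, compute outside
    the lock, and commit only if no read key changed; otherwise retry."""
    while True:
        with store.lock:
            snapshot = {k: store.data[k] for k in read_keys}
            versions = {k: store.version[k] for k in read_keys}
        writes = update_fn(snapshot)  # task logic runs without holding the lock
        with store.lock:
            if any(store.version[k] != versions[k] for k in read_keys):
                continue              # conflicted with a concurrent task: retry
            for k, v in writes.items():
                store.data[k] = v
                store.version[k] = store.version.get(k, 0) + 1
            return writes
```

The validate-then-apply step under the lock is what gives each task the atomicity and isolation of a transaction, while the optimistic retry loop lets non-conflicting tasks proceed fully in parallel.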
Past Projects
Middleware for Scalable
Location Based Services
In pervasive and mobile computing systems, there is a growing
interest in location-based services (LBS). The focus of this project is on the
development of a middleware architecture for building location-based
services. This project is addressing the requirements of a variety of LBS
applications such as find-a-friend in mobile social network groups,
location-based advertisements in M-commerce, traffic alerts and public safety
notifications, workflow management for mobile workers, public transit
information assistance for commuters, location-based messages, geo-notes, and
location-based micro-blogs. The goal of the project is to develop a middleware
architecture that would support the design and deployment of such a variety of
location-based services over the Internet. It would provide a fabric for
communication and interactions between the services and the mobile users based
on a publish/subscribe model. The middleware services and components would be
provisioned through Internet-based computing clusters or cloud environments. The
proposed research is investigating client-plus-cloud paradigms for
scalable deployment of LBS.
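The publish/subscribe fabric described above might look like the following sketch (an assumed design for illustration, not the project's middleware), where subscriptions and publications are matched by spatial grid cell so that only nearby subscribers receive a location-based message:

```python
from collections import defaultdict

class GeoPubSub:
    """Toy location-based publish/subscribe keyed by spatial grid cells."""

    def __init__(self, cell_size=0.01):
        self.cell_size = cell_size
        self.subs = defaultdict(list)  # grid cell -> subscriber callbacks

    def _cell(self, lat, lon):
        # Quantize coordinates to a grid cell used as the matching key.
        return (int(lat / self.cell_size), int(lon / self.cell_size))

    def subscribe(self, lat, lon, callback):
        self.subs[self._cell(lat, lon)].append(callback)

    def publish(self, lat, lon, message):
        for cb in self.subs[self._cell(lat, lon)]:
            cb(message)
```

A traffic-alert service, for example, would publish to the cells along an affected road while commuters subscribe at their current cell; a production design would also match neighboring cells and re-subscribe as users move.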
Ellora Framework for Resilient
Internet Services
This project is
investigating a system architecture for building highly available services that
are resilient to adverse conditions such as overload situations, attacks, and
network outages. The techniques being investigated are based on dynamic
replication, relocation and regeneration of services in case of overload
conditions.
This research is utilizing the Ajanta mobile agent
framework
for deployment, relocation, and replication of service components. This project is being conducted using the facilities of the
PlanetLab infrastructure, which poses unique challenges as the resource
capacities available to a service are based on the proportional share model. The
resource capacity available at a node can change unpredictably due to usage by other users and
applications. This project is investigating techniques for dynamic scaling
of service capacity and load distribution. This project has developed autonomic
mechanisms for service scaling and fault-tolerance. We also developed
an infrastructure service for monitoring the PlanetLab nodes for available
resource capacities in order to assist a service agent in selecting a target
node for relocation.
Secure
Context Aware Distributed Collaboration Systems
This project developed a generative
programming based approach for developing context-aware applications in active
spaces. Building upon the programming model and the middleware developed in the
distributed collaboration system project, mechanisms and specification models
were developed for dynamic discovery and binding of ambient services and
resources in multi-user applications developed in active-space environments,
based on the context conditions and physical locations of users. The focus of our
work was on security and robustness. This work resulted in the development
of the Context-Aware Role Based Access Control Model (CA-RBAC) and an exception
handling model to deal with binding failures and service failures.
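The core CA-RBAC idea can be sketched as follows (the role names, context attributes, and predicate here are hypothetical, not taken from the published model): role activation is gated by a context predicate in addition to ordinary role membership:

```python
def can_activate(policy, user_roles, role, context):
    """A user may activate a role only if assigned it AND the role's
    context predicate (e.g. over location and time) currently holds."""
    return role in user_roles and policy[role](context)

# Hypothetical policy: the "nurse" role is usable only in the ward, 8am-8pm.
policy = {"nurse": lambda ctx: ctx["location"] == "ward" and 8 <= ctx["hour"] < 20}
```

Tying permissions to context like this is what lets an active-space application revoke access automatically when a user walks out of the room, which is also why the exception-handling model matters: a context change can invalidate bindings mid-session.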
Policy Driven Secure Distributed Collaboration
This project developed a
policy-driven middleware for building secure distributed collaboration systems
from their high level specifications. Our specification model supports nested
collaboration activities and uses role based security policies and event count
based coordination specification. From the specifications of a collaboration
environment, appropriate policy modules are derived for enforcing security and
coordination requirements. A policy-driven distributed middleware provides
services to the users to join roles in an activity, perform role specific
operations, or create new activities.
In our model, a policy-driven collaboration system is realized in three
steps. Initially, the coordination and
security policy for a collaboration is specified based on a schema.
From the specification, various policy modules are derived for different kinds of
requirements, such as
role based security, object level access control, and event notification for coordination.
Finally, through these modules, the collaboration environment is realized
by a generic middleware.
We have developed a specification model using XML,
in which a collaborative system is defined in terms of activities, roles and objects.
The model allows dynamic assignments of roles,
"separation of duties" constraints, multiple user participation in a role, active
security policies,
and hierarchical activity definitions. We used SPIN to develop model checking
techniques for verifying the security properties of a design.
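The event count based coordination mentioned above can be sketched as follows (an illustrative reading of the model, with hypothetical operation and event names): an operation in a collaboration activity becomes enabled only when a predicate over counts of past events holds:

```python
from collections import Counter

class EventCountCoordinator:
    """Enable an operation when a predicate over counts of past events holds."""

    def __init__(self, preconditions):
        self.counts = Counter()
        self.pre = preconditions  # operation name -> predicate over event counts

    def record(self, event):
        self.counts[event] += 1

    def enabled(self, op):
        return self.pre[op](self.counts)

# Hypothetical rule: voting may close only after at least 3 ballots are cast.
coord = EventCountCoordinator({"close_vote": lambda c: c["ballot"] >= 3})
```

A middleware derived from a collaboration specification would evaluate such predicates before permitting each role-specific operation, which is how declarative coordination requirements become runtime checks.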
Konark System: Agent-Based Distributed Event Stream Processing
Konark is an
agent-based event stream processing system based on the Ajanta platform. In
Konark, mobile-agents are used for monitoring nodes in a network to detect
and communicate events to other agents for further filtering, aggregation and
correlation. Events are communicated among agents using a publish-subscribe
model. An agent can subscribe to events from different remote agents in the
network to perform event filtering and correlation functions. We also
developed policy-driven autonomic mechanisms for configuration and
resilient operations of an ensemble of Konark agents in an event stream
processing application. We used the Konark system for developing a
system for monitoring computers in our lab facilities for attacks and
intrusions. The Konark system was also used for building a context detection
service, based on monitoring events from RFID readers and Bluetooth device
sensors, for experiments in context-aware distributed collaboration systems.
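A typical correlation function such an agent might run is a sliding-window rule (a hypothetical example, e.g. flagging a burst of failed logins on a monitored host; the event names are illustrative):

```python
from collections import deque

class CorrelationAgent:
    """Sliding-window correlation: raise an alert when `threshold` events
    of interest occur within `window` seconds."""

    def __init__(self, window=60.0, threshold=5):
        self.window = window
        self.threshold = threshold
        self.times = deque()
        self.alerts = []

    def on_event(self, timestamp, kind):
        if kind != "login_failure":  # filtering step: only correlate failures
            return
        self.times.append(timestamp)
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()     # drop events that fell out of the window
        if len(self.times) >= self.threshold:
            self.alerts.append(timestamp)
```

In a Konark-style deployment, monitoring agents at each node would publish raw events while a downstream agent subscribing to several of them runs correlation logic of this kind.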
Ajanta
Project (1997-2002)
The Ajanta mobile agent programming framework was developed during
1997-2002 at the University of Minnesota. A mobile agent is a Java object
that can securely migrate over the Internet to perform designated tasks
at one or more nodes. The Ajanta system
provides an infrastructure for secure and robust execution of mobile agents. In
a broad sense, a mobile agent is a program which represents a user in a network
and is capable of migrating autonomously from node to node, performing
computations on behalf of the user. The programmer can define agents as active
application components that traverse the network performing computations
relevant to their current location. For example, agents can be used
for information searching, filtering and retrieval, and for electronic
commerce on the Web, thus acting as personal assistants for their owners.
As tools for system administration, they can be used in low-level network
maintenance, testing, fault diagnosis, and for installing or upgrading software
on remote machines. Agents are also useful for extending or modifying the
capabilities of existing services by dynamically adding to their functionality.
Ajanta is implemented using the Java language and its security mechanisms are
designed based on Java's security model. It also makes use of several other
facilities of Java, such as object serialization, reflection, and remote method
invocation.
Sponsors
National Science Foundation Awards: CNS 1319333, CNS 0834357, NSF 0411961,
ITR 0082215, ANI 0087514, CNS 0708604, EIA 9818338, ANIR 9813703.
The Minnesota Supercomputing Institute has provided computing resources for these
projects.