GCAI 2017: Papers with Abstracts

Papers
Abstract. Progress in satisfiability (SAT) solving has enabled answering long-standing open questions in mathematics completely automatically, resulting in clever though potentially gigantic proofs. We illustrate the success of this approach by presenting the solution of the Boolean Pythagorean triples problem. We also produced and validated a proof of the solution, which has been called the “largest math proof ever”. The enormous size of the proof is not important; in fact, a shorter proof would have been preferable. However, the size shows that automated tools combined with supercomputing facilitate solving bigger problems. Moreover, the proof of 200 terabytes can now be validated using highly trusted systems, demonstrating that we can check the correctness of proofs no matter their size.
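The underlying CNF encoding is compact enough to sketch. The following illustration is ours, not the authors' pipeline (the actual solution additionally relied on cube-and-conquer on a cluster): one Boolean variable per integer encodes its color, and each Pythagorean triple contributes two clauses forbidding monochromatic colorings.

```python
import math

# Minimal sketch of the standard encoding: variable i is true if integer i
# gets the first color; each triple a^2 + b^2 = c^2 within {1..n} must not
# be monochromatic, giving two clauses per triple.

def pythagorean_triples_cnf(n):
    clauses = []
    for a in range(1, n + 1):
        for b in range(a, n + 1):
            c = math.isqrt(a * a + b * b)
            if c <= n and c * c == a * a + b * b:
                clauses.append([a, b, c])      # not all of the second color
                clauses.append([-a, -b, -c])   # not all of the first color
    return clauses

def to_dimacs(n, clauses):
    lines = ["p cnf {} {}".format(n, len(clauses))]
    lines.extend(" ".join(map(str, cl)) + " 0" for cl in clauses)
    return "\n".join(lines)

# n = 7824 is the largest satisfiable instance; n = 7825 is unsatisfiable.
print(to_dimacs(100, pythagorean_triples_cnf(100)))
```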
Abstract. We extend the terminological formalism of the well-known description logic ALC from concept inclusions (CIs) to more general constraints formulated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). In QFBAPA one can formulate Boolean combinations of inclusion constraints and numerical constraints on the cardinalities of sets.
Our new formalism extends, on the one hand, so-called cardinality restrictions on concepts, which were introduced two decades ago, and on the other hand the recently introduced statistical knowledge bases. Though considerably more expressive, our formalism has the same complexity (NExpTime) as cardinality restrictions on concepts. We also introduce a restricted version of our formalism for which the complexity is ExpTime. This yields the previously unknown exact complexity of the consistency problem for statistical knowledge bases.
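To give a flavor of the formalism, here is an illustrative constraint of our own (not taken from the paper) mixing an inclusion with a cardinality comparison, of the kind a statistical knowledge base might state:

```latex
% Illustrative QFBAPA-style terminological constraint (our own example):
% either every bird flies, or non-flying birds make up at most ten percent
% of all birds -- a Boolean combination of an inclusion constraint and a
% numerical constraint on set cardinalities.
\[
  (\mathit{Bird} \sqsubseteq \mathit{Flies})
  \;\lor\;
  10 \cdot |\mathit{Bird} \sqcap \lnot\mathit{Flies}| \le |\mathit{Bird}|
\]
```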
Abstract. Deep reinforcement learning has become popular over recent years, showing superiority on different visual-input tasks such as playing Atari games and robot navigation. Although objects are important image elements, little work considers enhancing deep reinforcement learning with object characteristics. In this paper, we propose a novel method that can incorporate object recognition processing into deep reinforcement learning models. This approach can be adapted to any existing deep reinforcement learning framework. State-of-the-art results are shown in experiments on Atari games. We also propose a new approach called “object saliency maps” to visually explain the actions taken by deep reinforcement learning agents.
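The general recipe behind such saliency maps can be sketched (a hedged reconstruction of ours, not the authors' code, with `q_network` and `detect_objects` as assumed stand-ins): occlude one detected object at a time and record how much the chosen action's Q-value drops.

```python
import numpy as np

# Sketch of an occlusion-based object saliency score: mask each detected
# object out of the frame, re-evaluate the Q-values, and score the object
# by the drop in the chosen action's Q-value.

def object_saliency(frame, q_network, detect_objects, background=0.0):
    q = q_network(frame)                          # vector of Q-values
    action = int(np.argmax(q))
    saliency = []
    for (x0, y0, x1, y1) in detect_objects(frame):  # bounding boxes
        masked = frame.copy()
        masked[y0:y1, x0:x1] = background         # occlude one object
        q_masked = q_network(masked)
        # Positive score: removing the object lowers the action's value.
        saliency.append(((x0, y0, x1, y1), q[action] - q_masked[action]))
    return action, saliency
```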
Abstract. Maximum entropy reasoning (ME-reasoning) based on relational conditionals combines the capability of ME-distributions to express uncertain knowledge in a way that fits commonsense reasoning excellently with the great expressivity of an underlying first-order logic. The drawbacks of this approach are its high complexity, which is generally paired with a costly dependency on the domain size, and its lack of transparency, since, unlike Bayesian networks, it comes without a priori independence assumptions. In this paper we present independence results for ME-reasoning based on the aggregating semantics for relational conditionals that help to disentangle the composition of ME-distributions, and therefore lead to a problem reduction and provide structural insights into ME-reasoning.
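For reference, the ME-distribution is the entropy-maximal model of the conditional knowledge base (standard definition, in our rendering):

```latex
% Maximum entropy distribution over the set of worlds Omega for a set R of
% conditionals: among all probability distributions satisfying R, pick the
% unique one with maximal entropy.
\[
  P^{\mathrm{ME}}_{\mathcal{R}}
  \;=\; \operatorname*{arg\,max}_{P \,\models\, \mathcal{R}}\;
        \Bigl(- \sum_{\omega \in \Omega} P(\omega) \log P(\omega)\Bigr)
\]
```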
Abstract. Problem simplification is a topic of continuing interest in the field of constraint satisfaction. In this paper we examine properties associated with the basic idea of substitutability and show how certain forms of substitutability can be organized into a strict hierarchy. One of these properties, here called replaceability, has been identified by other authors as being of special interest. In this work we confirm these earlier claims and show that replaceability is significant because it is the most general property in the hierarchy that allows inferences from local to global versions of this property. To make use of this discovery, we introduce two algorithms for establishing neighbourhood replaceability, and we present an initial experimental examination of these algorithms, including ways to improve their performance through ordering heuristics and various kinds of inference.
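To make the local flavor of such properties concrete, here is a naive sketch of ours for binary CSPs (the paper's algorithms are more refined): a neighbourhood-level check asks whether every support of value a under each constraint on the variable is also a support of value b.

```python
# Naive neighbourhood check in the spirit of substitutability/replaceability
# (our sketch, binary CSPs only): b can stand in for a at variable x if every
# support of a under each constraint on x also supports b. `domains` maps
# variables to value lists; `constraints` maps ordered pairs (u, v) to
# predicates accepted(vu, vv) -- both are assumed interfaces.

def nbhd_substitutable(x, a, b, domains, constraints):
    for (u, v), accepted in constraints.items():
        if x not in (u, v):
            continue
        other = v if u == x else u
        for w in domains[other]:
            sat_a = accepted(a, w) if u == x else accepted(w, a)
            sat_b = accepted(b, w) if u == x else accepted(w, b)
            if sat_a and not sat_b:
                return False   # a has a support that b lacks
    return True                # a may be removed in favor of b
```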
Abstract. Two nondeterministic algorithms for generalizing a solution of a constraint expressed in second-order typed lambda-calculus are presented. One algorithm derives from the proof of completeness of the higher-order unification rules by D. C. Jensen and T. Pietrzykowski; the other is abstracted from an algorithm by N. Peltier and the author for generalizing proofs. A framework is developed in which such constrained generalization algorithms can be designed, allowing a uniform presentation of the two algorithms. Their relative strength at generalization is then analyzed through some properties of interest: their behaviour on valid and first-order constraints, and whether they may be iterated or composed.
Abstract. Research on iterated belief change has focused mostly on belief revision; only a few papers have addressed iterated belief contraction. Most prominently, Darwiche and Pearl published seminal work on iterated belief revision, the leading paradigm of which is the so-called principle of conditional preservation. In this paper, we use this principle in a thoroughly axiomatized form to develop iterated belief contraction operators for Spohn's ranking functions. We show that it allows for setting up constructive approaches to tackling the problem of how to contract a ranking function by a proposition or a conditional, respectively, and that semantic principles can also be derived from it for the purely qualitative case.
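For background (standard Spohn-style definitions, in our rendering): a ranking function κ grades worlds by disbelief, a proposition A is believed iff all ¬A-worlds are disbelieved, and a contraction by A must give that belief up:

```latex
% Ranking function kappa: worlds -> grades of disbelief; kappa(A) is the
% rank of a proposition, A is believed iff not-A is disbelieved, and after
% contracting by A the proposition not-A must be unsurprising (rank 0).
\[
  \kappa(A) = \min\{\kappa(\omega) : \omega \models A\}, \qquad
  A \in \mathrm{Bel}(\kappa) \iff \kappa(\lnot A) > 0, \qquad
  (\kappa \div A)(\lnot A) = 0
\]
```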
Abstract. We apply genetic algorithms (GAs) to evolve cyclic finite automata for scheduling the dispatch of trucks in mines. The GA performs well generally, and on problems which include one-lane roads it was able to find solutions that utilised shovels very well, with low contention and using fewer trucks than both the widely used linear-programming DISPATCH algorithm and commonly used greedy heuristics. The GA provides significant cost savings, or production increases, on problems where alternative algorithms do not adapt well.
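The evolutionary loop behind such a system is standard; the problem-specific parts are the automaton encoding and the simulation-based fitness, which we only stub here. A generic skeleton of ours, with `fitness`, `random_individual`, `mutate`, and `crossover` as assumed caller-supplied callbacks:

```python
import random

# Generic GA skeleton with tournament selection and elitism. In the setting
# above, individuals would encode cyclic finite automata and fitness would
# come from a mine-dispatch simulation.

def evolve(fitness, random_individual, mutate, crossover,
           pop_size=100, generations=200, k=3, p_cross=0.9):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(fitness(ind), ind) for ind in pop]
        best = max(scored, key=lambda s: s[0])[1]
        def select():  # k-way tournament selection
            return max(random.sample(scored, k), key=lambda s: s[0])[1]
        nxt = [best]   # elitism: carry the best individual over unchanged
        while len(nxt) < pop_size:
            child = (crossover(select(), select())
                     if random.random() < p_cross else select())
            nxt.append(mutate(child))
        pop = nxt
    return max(pop, key=fitness)
```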
Abstract. Herbrand structures are a subclass of standard first-order structures commonly used in logic and automated reasoning due to their strong definitional character. This paper is devoted to the logics induced by them: Herbrand and semi-Herbrand logics, with and without equality. The rich expressiveness of these logics entails that there is no adequate effective proof system for them. We therefore introduce infinitary proof systems for Herbrand logics, and prove their completeness. Natural and sound finitary approximations of the infinitary systems are also presented.
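The infinitary ingredient can be conveyed by an ω-style rule (our schematic rendering, not necessarily the paper's exact formulation): since the domain of a Herbrand structure consists precisely of the ground terms, a universal statement follows from the totality of its ground instances.

```latex
% Omega-style rule under a Herbrand semantics: the domain is exactly the
% set of ground terms t, so proving every ground instance of phi proves
% the universally quantified statement.
\[
  \frac{\;\vdash \varphi(t) \quad \text{for every ground term } t\;}
       {\vdash \forall x\, \varphi(x)}
\]
```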
Abstract. In this work we significantly increase the performance of the Vampire and E automated theorem provers (ATPs) on a set of loop theory problems. This is done by developing EmpireTune, an AI system that automatically invents targeted search strategies for Vampire and E. EmpireTune extends previous strategy invention systems in several ways. We have developed support for the Vampire prover, further extended Vampire by new mechanisms for specifying term orderings, and EmpireTune can now automatically invent suitable term orderings for classes of problems. We describe the motivation behind these additions, their implementation in Vampire and EmpireTune, and evaluate the systems with very good results on the AIM (loop theory) ATP benchmark.
Abstract. Decision processes in Artificial Intelligence are often organized hierarchically. One example is robot navigation with a global path planner and a local executor. This paper examines whether a shift from optimizing the two typical modules in navigation towards a dynamic interaction of more, not necessarily hierarchically linked, modules leads to robust navigation behavior. We empirically evaluate different organizations of modules for navigation in a simulated household with a simulated PR2 robot.
Abstract. Nature showcases swarms of animals and insects performing various complex tasks efficiently where capabilities of individuals alone in the swarm are often quite limited. Swarm intelligence is observed when agents in the swarm follow simple rules which enable the swarm to perform certain complex tasks. This decentralized approach of nature has inspired the artificial intelligence community to apply this approach to engineered systems. Such systems are said to have no single point of failure and thus tend to be more resilient. The aim of this paper is to put this notion of resilience to the test and quantify the robustness of two swarm algorithms, namely "swarmtaxis" and "FSTaxis". The first simulation results of the effects of introducing an impairment in agent-to-agent interactions in these two swarm algorithms are presented in this paper. While the FSTaxis algorithm shows a much higher resilience to agent-to-agent communication failure, both the FSTaxis and swarmtaxis algorithms are found to have a non-zero tolerance towards such failures.
Abstract. The second-order Copeland voting scheme is NP-complete to manipulate even if a manipulator has perfect information about the preferences of other voters in an election. A recent work proposes a branch-and-bound heuristic for manipulation of second-order Copeland elections. That work shows that there are instances of elections that can be manipulated using the branch-and-bound heuristic. However, the performance of the heuristic degrades for a fairly large number of candidates. We show that this heuristic is exponential in the number of candidates in an election, and propose an improved heuristic that extends this previous work. Our improved heuristic is based on a randomization technique and is shown to be polynomial in the number of candidates in an election. We also account for the number of samples required for a given accuracy and for the probability of missing the accurate value of the number of manipulations in an election.
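For orientation, the standard Hoeffding-style bound relates the number of samples m to accuracy ε and failure probability δ (a textbook bound we state for reference; the paper's own accounting may differ):

```latex
% Hoeffding-style sample bound: m independent samples estimate a proportion
% to within epsilon, except with probability at most delta.
\[
  m \;\ge\; \frac{1}{2\varepsilon^{2}} \ln\!\frac{2}{\delta}
\]
```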
Abstract. We present a method we call structure-based preferential bumping, as a low-cost way to exploit formula structure in VSIDS-based SAT solvers. We show that the Glucose SAT solver, when modified with preferential bumping of certain easily identified structurally important variables, outperforms unmodified Glucose on the industrial formulas from recent SAT solver competitions.
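The bumping mechanism itself is simple to sketch (our illustration of VSIDS-style activities with preferential bumping, not the actual Glucose patch): variables flagged as structurally important receive an extra boost whenever they are bumped, so the branching heuristic prefers them.

```python
# Illustrative VSIDS-style activity bookkeeping with preferential bumping.
# Variables in `important` (e.g. easily identified structural variables)
# get their activity increment multiplied by an extra factor.

class Activities:
    def __init__(self, n_vars, decay=0.95, boost=2.0, important=()):
        self.act = [0.0] * (n_vars + 1)   # activity per variable, 1-indexed
        self.inc = 1.0
        self.decay = decay
        self.boost = boost
        self.important = set(important)

    def bump(self, var):
        extra = self.boost if var in self.important else 1.0
        self.act[var] += self.inc * extra
        if self.act[var] > 1e100:          # rescale to avoid overflow
            self.act = [a * 1e-100 for a in self.act]
            self.inc *= 1e-100

    def on_conflict(self, conflict_vars):
        for v in conflict_vars:
            self.bump(v)
        self.inc /= self.decay             # exponentially decay old bumps

    def pick_branch_var(self, unassigned):
        return max(unassigned, key=lambda v: self.act[v])
```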
Abstract. In this paper we study the Secrecy-Preserving Query Answering problem under the Open World Assumption (OWA) for Prob-EL^{>0,=1} Knowledge Bases (KBs). We have designed a tableau procedure to compute a semi model M over the given KB which is eventually equivalent to a probabilistic model of the KB. Given a secrecy set S, which is a finite set of assertions, we compute a function E, called an envelope of S, which assigns a set E(w) of assertions to each world w in the semi model M. E provides logical protection to the secrecy set S against the reasoning of a querying agent. Once the semi model M and an envelope E are computed, we define the secrecy-preserving semi model M_E. Based on the information available in M_E, assertional queries with probabilistic operators can be answered efficiently while preserving secrecy. To the best of our knowledge, this is the first work studying secrecy-preserving reasoning in a description logic augmented with probabilistic operators. When the querying agent asks a query q, the reasoner answers “Yes” if information about q is available in M_E; otherwise, the reasoner answers “Unknown”. Being able to answer “Unknown” plays a key role in protecting secrecy under the OWA. Since we are not computing all the consequences of the knowledge base, answers to queries based on just the secrecy-preserving semi model M_E could be erroneous. To fix this problem, we further augment our algorithms with a recursive query decomposition algorithm to make the query answering procedure foolproof.
Abstract. In 1992, Stuart Russell briefly introduced a series of memory-efficient optimal search algorithms, among which is the Simplified Memory-bounded A* (SMA*) algorithm, unique for its explicit memory bound. Despite progress in memory-efficient A* variants, search algorithms with explicit memory bounds have seen little further development, and SMA* remains the premier memory-bounded optimal search algorithm. In this paper, we present an enhanced version of SMA* (SMA*+), providing a new open list, a simplified implementation, and a culling heuristic function which improves search performance through a priori knowledge of the search space. We present benchmark and comparison results with state-of-the-art optimal search algorithms, and examine the performance characteristics of SMA*+.
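To convey the mechanism SMA*-style algorithms revolve around, here is a heavily simplified sketch of ours (not the paper's SMA*+ implementation, and omitting partial expansion, duplicate handling, and the culling heuristic): a bounded open list that, when full, forgets its least promising node but backs that node's f-value up to its parent so the subtree can be regenerated later.

```python
import heapq
import itertools

class BoundedOpenList:
    """Simplified illustration of SMA*'s memory bound: keep at most
    `capacity` frontier nodes; when full, forget the worst node but record
    its f-value under its parent so the search can regenerate the forgotten
    subtree if it ever becomes promising again."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []                  # (f, tiebreak, state, parent)
        self.tick = itertools.count()   # tiebreaker so states never compare
        self.forgotten = {}             # parent -> min f of forgotten children

    def push(self, f, state, parent):
        heapq.heappush(self.heap, (f, next(self.tick), state, parent))
        if len(self.heap) > self.capacity:
            worst = max(self.heap)      # largest f = least promising
            self.heap.remove(worst)
            heapq.heapify(self.heap)
            wf, _, _, wparent = worst
            self.forgotten[wparent] = min(self.forgotten.get(wparent, wf), wf)

    def pop_best(self):
        f, _, state, parent = heapq.heappop(self.heap)
        return f, state, parent
```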
Abstract. The Artificial Neural Network (ANN) has been well recognized as an effective tool in medical science. Medical data is complex, and currently collected data does not flow in a standardized way. Data complexity makes the outcomes associated with intra-operative blood management difficult to assess. Reliable evidence is needed to selectively define areas that can be improved and to establish standard protocols across healthcare service lines to substantiate best practice in blood product utilization. The ANN is able to provide this evidence, using automatic learning techniques to mine the information hidden in the medical data and come to conclusions.
Blood transfusions can be lifesaving and are used commonly in complex surgical cases, but they come with associated risks and are costly. Anemia and clinical symptoms are currently used to determine whether a packed red blood cell transfusion is necessary. In this paper, we work with unique datasets on intra-operative blood management collected from the electronic medical record of the Keck Medical Center of USC. We apply a Multilayer Perceptron Neural Network to estimate missing values and predict the degree of post-operative anemia. Successful predictions of post-operative anemia may help inform medical practitioners whether there is a need for a further packed red blood cell transfusion.
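The modeling pipeline can be sketched with standard tooling (a generic illustration of ours with invented data; the paper's features and network are specific to the Keck dataset, and we substitute simple mean imputation for the paper's MLP-based estimation of missing values):

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for intra-operative features with missing values.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
X[rng.random(X.shape) < 0.1] = np.nan            # simulate missing entries
y = np.nan_to_num(X).sum(axis=1) + rng.normal(size=500)  # fake outcome

model = make_pipeline(
    SimpleImputer(strategy="mean"),              # estimate missing values
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```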
Abstract. Nowadays, the use of artificial neural networks (ANNs), in particular the Multilayer Perceptron (MLP), is very popular for executing different tasks such as pattern recognition, data mining, and process automation. However, there are still weaknesses in these models when compared with human capabilities. A characteristic of human memory is the ability to learn new concepts without forgetting what we learned in the past, which has been a disadvantage in the field of artificial neural networks. How can we add new knowledge to the network without forgetting what has already been learned and without repeating the exhaustive training process? In exhaustive training, a complete training set is used, with all objects of all classes.

In this work, we present a novel incremental learning algorithm for the MLP. New knowledge is incorporated into the target network without executing an exhaustive retraining. This knowledge comes from objects of a new class that was not included in the training of a source network. The algorithm consists of taking the final weights from the source network, correcting them with Support Vector Machine tools, and transferring the obtained weights to a target network. This last network is trained with a training set that has been preprocessed beforehand. The resulting efficiency of the target network is comparable with that of an exhaustively trained network.
Abstract. We propose using abduction for inferring implicit rules for Smart City ontologies.
We show how we can use Z3 to extract candidate abducers from partial ontologies and leverage them in an iterative process of evolving an ontology by refining relations and restrictions, and populating relations.

Our starting point is a Smart City initiative of the city of Barcelona, where a substantial ontology is being developed to support processes such as city planning and social services, and to improve the quality of the data concerning (for instance) legal entities, whose incompleteness may sometimes hide fraudulent behavior. In our scenario we are supporting semantic queries over heterogeneous and noisy data. The approach we develop would allow evolving ontologies in an iterative fashion as new relations and restrictions are discovered.
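The core abductive check is easy to sketch with Z3's Python API (our own minimal propositional reconstruction, not the paper's system): a candidate hypothesis h explains an observation obs under a theory T when T together with h is consistent and entails obs.

```python
from z3 import Bool, Implies, Not, Solver, sat, unsat

def explains(theory, h, obs):
    s = Solver()
    s.add(*(theory + [h]))
    if s.check() != sat:          # T + h inconsistent: reject the hypothesis
        return False
    s.add(Not(obs))
    return s.check() == unsat     # T + h + not(obs) unsat: h entails obs

# Toy ontology-flavored example with propositional stand-ins.
Company, Registered, PaysTax, Fraud = (Bool(n) for n in
    ("Company", "Registered", "PaysTax", "Fraud"))
T = [Implies(Company, Registered), Implies(Registered, PaysTax)]
obs = PaysTax
candidates = [Company, Registered, Fraud]
print([str(h) for h in candidates if explains(T, h, obs)])
# -> ['Company', 'Registered']
```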
Abstract. Incorporating a dynamic kick engine that is both fast and effective is pivotal to being competitive in one of the world’s biggest AI and robotics initiatives: RoboCup. Using the NAO robot as a testbed, we developed a dynamic kick engine that can generate a kick trajectory with an arbitrary direction without prior input or knowledge of the parameters of the kick. The trajectories are generated using cubic splines (two degree-three polynomials with a via-point), cubic Hermite splines, or sextics (a single degree-six polynomial), and are executed while the robot is dynamically balancing on one foot. Although a variety of kick engines have been implemented by others, only a few papers demonstrate how kick engine parameters have been optimized to give an effective kick, or even a kick that minimizes energy consumption and time. Parameters such as the kick configuration, the limits of the robot, or the shape of the polynomial can be optimized. We propose an optimization framework based on the Webots simulator to optimize these parameters. Experiments with different joint interpolators for kick motions were carried out to compare results.
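As a small illustration of one of the interpolators involved (our own sketch with invented values; the paper tunes such trajectories in Webots on the NAO): a cubic Hermite spline pins both positions and velocities at its knots, which makes it easy to start the foot at rest and reach peak speed at impact.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# One joint's kick trajectory as a cubic Hermite spline through a via-point:
# positions and velocities are pinned at each knot. All values are made up.
t = np.array([0.0, 0.15, 0.4])        # s: start, via-point, ball impact
angle = np.array([0.2, -0.1, 0.6])    # rad: hypothetical knee trajectory
velocity = np.array([0.0, 2.0, 4.0])  # rad/s: at rest, swinging, fastest at impact

spline = CubicHermiteSpline(t, angle, velocity)
for ti in np.linspace(0.0, 0.4, 9):
    print(f"t={ti:.3f}s  q={spline(ti):+.3f} rad")
```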