Papers by Dipankar Sarkar
7th International Symposium on Quality Electronic Design (ISQED'06)
This paper describes a formal method for checking the equivalence between the finite state machine with data path (FSMD) model of a high-level behavioural specification and the FSMD model of the behaviour transformed by the scheduler. The method consists in introducing cutpoints in one FSMD, visualizing its computations as concatenations of paths from cutpoint to cutpoint and, finally, identifying equivalent finite path segments in the other FSMD; the process is then repeated with the FSMDs interchanged. The method is strong enough to accommodate merging of segments in the original behaviour by typical schedulers such as DLS, a feature very common in scheduling but not captured by many works reported in the literature. It also handles arithmetic transformations.
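The path-based idea can be sketched in miniature. The following Python fragment is a toy encoding, not the paper's actual formalism (the `guard`/`update` representation and the sampled environments are all assumptions): it checks that a cutpoint-to-cutpoint path of one FSMD and a candidate path of the other are enabled together and transform the data identically.

```python
# Toy FSMD paths (hypothetical encoding): each path from a cutpoint to a
# cutpoint carries a guard (condition of execution) and an update (data
# transformation), modelled here as Python functions over an environment.

def path_equivalent(p, q, samples):
    """Check that two paths are enabled together and transform the
    environment identically on every sampled environment."""
    for env in samples:
        if p["guard"](env) != q["guard"](env):
            return False
        if p["guard"](env) and p["update"](env) != q["update"](env):
            return False
    return True

# Original behaviour: if a > 0 then x := a + a
p0 = {"guard": lambda e: e["a"] > 0,
      "update": lambda e: {**e, "x": e["a"] + e["a"]}}
# Scheduled behaviour after an arithmetic transformation: x := 2 * a
q0 = {"guard": lambda e: e["a"] > 0,
      "update": lambda e: {**e, "x": 2 * e["a"]}}

samples = [{"a": a, "x": 0} for a in range(-3, 4)]
print(path_equivalent(p0, q0, samples))  # True
```

A real checker compares the symbolic condition of execution and the symbolic data transformation of each path pair rather than sampling environments; the sampling here only stands in for that symbolic comparison.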

2007 International Conference on Computing: Theory and Applications (ICCTA'07), 2007
The variables of the high-level specification and the automatically generated temporary variables are mapped onto the data-path registers during the data-path synthesis phase of the high-level synthesis process. The registers in the data path are usually shared by the variables, and the mapping is not bijective, as most high-level synthesis tools perform register optimization. In this paper, a formal methodology for verifying the correctness of register sharing is described. The input and the output of the data-path synthesis phase are represented as finite state machines with datapaths (FSMDs). The method is based on checking the equivalence of two FSMDs. Our technique is independent of the mechanism used for register optimization and works for both carrier-based and value-based register optimization. The method also works for both data-intensive and control-intensive input specifications. Our current implementation is integrated with an existing synthesis tool and has been tested for robustness.
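As an illustration of what correct register sharing must rule out, the sketch below (hypothetical names; a liveness-based stand-in for illustration, not the paper's FSMD equivalence check) tests that no two simultaneously live variables are mapped to the same register.

```python
# Hypothetical legality check for register sharing: two variables may
# share a register only if their live ranges do not overlap.

def legal_sharing(live_ranges, mapping):
    """live_ranges: var -> (start, end) half-open interval of liveness.
    mapping: var -> register name. Returns True if no two variables
    that are live at the same time share a register."""
    vars_ = list(live_ranges)
    for i, u in enumerate(vars_):
        for v in vars_[i + 1:]:
            if mapping[u] == mapping[v]:
                (s1, e1), (s2, e2) = live_ranges[u], live_ranges[v]
                if s1 < e2 and s2 < e1:  # live ranges overlap
                    return False
    return True

live = {"a": (0, 3), "b": (3, 6), "c": (1, 5)}
ok   = {"a": "r0", "b": "r0", "c": "r1"}  # a and b are never live together
bad  = {"a": "r0", "b": "r1", "c": "r0"}  # a and c overlap on r0
print(legal_sharing(live, ok), legal_sharing(live, bad))  # True False
```

The paper's method checks equivalence of the FSMDs before and after register mapping, which subsumes this interference condition without depending on how the optimizer computed the mapping.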

Proceedings of the 17th Great Lakes Symposium on VLSI (GLSVLSI '07), 2007
To cope with increasing design complexity and size, significant efforts are being made in high-level synthesis (HLS) methodologies, design languages and verification methodologies to leverage the expressive power of high-level models and to reduce design cycle time and cost. High-level synthesis is the process of generating a concrete structure from a high-level design while meeting three main design constraints: area, timing and power [1-4]. Verification, on the other hand, ensures that the design meets the specification. As of today, verification efforts take about 70% of the design cycle. At the structural level, formal verification techniques are quite mature and advanced compared to those at the high level. Part of the reason is a general skepticism among designers regarding verification efforts made on the abstract model of the design rather than on the structural design, which is closer to the design on chip. To leverage recent advancements in Boolean SAT-based verification techniques [5] and to reduce this skepticism, we believe that HLS methodologies should also focus on another dimension, namely functional verification. Though this dimension is not part of the standard design constraints, it can help to substantially reduce the verification effort. HLS, in general, does not necessarily synthesize models that are verification "friendly". On at least the following three counts, we find that verification tools perform poorly on models synthesized by HLS.
• Area optimization: Given the limited hardware resources in a typical HLS problem, the synthesized structural design will have a large number of multiplexers compared to a design synthesized under the assumption of unlimited resources. The multiplexer count increases due to sharing of the limited operators. It is well established in the SAT community that multiplexers, in general, are not good for SAT engines; hence, SAT-based verification engines are affected adversely.
• Sequential scheduling: Because of the limited-resource constraint, the synthesized design is sequentially deeper. Sequential depth can also increase due to multi-cycle operators. Such an increase in sequential depth results in time-consuming deeper searches by verification engines, adversely affecting their performance and requiring debugging of longer error traces.
• Memory optimization: Traditionally, HLS uses explicit memory modeling for embedded memories. However, it has been shown [6] that for embedded memories, an Efficient Memory Model (EMM) for verification is far superior to an explicit model.
We have experimented with the HLS tool Cyber [1], integrated with a SAT-based model checking tool, DiVer [5]. These tools have state-of-the-art high-level synthesis and verification algorithms, respectively, and are widely adopted by NEC designers. We used Cyber to synthesize a design in Behavioral Description Language (BDL) in two modes: (a) with maximal sharing of operators and (b) with minimal sharing of operators. Design I, synthesized in mode (a), has 2450 multiplexers (muxes) and 31 FSM states. Design II, synthesized in mode (b), has 2144 muxes and 25 FSM states. The verification time for a safety property was 12 s on Design I and 6 s on Design II. Though the area is increased in Design II, the search improves because the problem partitioning is done at a higher level (by removal of muxes). Such partitioning can also be done at the bit level by the SAT decision procedure, but it is much less effective there. Clearly, the objectives of optimization for synthesis and for verification are not the same, and verification complexity is growing exponentially with design complexity.
As high-level synthesis tools are slowly but steadily gaining popularity among designers, we believe it will be almost impossible for HLS providers to ignore the "synthesis for verification" paradigm. To bridge the widening gap, we propose using the existing infrastructure of HLS to generate verification-friendly models and properties from the given high-level design and specification. Verification can then be carried out on the friendly model to obtain a "golden" reference model. Moreover, since HLS generates both the verification model and the synthesized structural model, it has the internal signal correspondences between them. This information can subsequently be used by an equivalence checker to validate the correctness of the structural design (which might have been changed manually to meet design constraints) against the "golden" reference model. We believe this is also an important step towards removing the skepticism among designers regarding verification efforts made on an abstract model rather than on the structural model. Currently, we are exploring various high-level synthesis heuristics that can be used to generate smarter verification models. In the talk, we will discuss various challenges and issues with such a paradigm.
Verification of KPN Level Transformations
2013 26th International Conference on VLSI Design and 2013 12th International Conference on Embedded Systems, 2013
A verification framework for checking the correctness of Kahn process network (KPN) level transformations is presented for multimedia and signal processing applications. The initial and the transformed KPN behaviours are both modelled as array data dependence graphs (ADDGs), and the verification problem is posed as checking the equivalence of the two ADDGs. The key aspect of our scheme is the modelling of a KPN behaviour as an ADDG. The verification framework is explained with channel merging transformations. Experimental results supporting the usability of this scheme are also provided.

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2014
An equivalence checking method for finite state machines with datapath, based on value propagation over model paths, is presented here for validation of code motion transformations commonly applied during the scheduling phase of high-level synthesis. Unlike many other reported techniques, the method is able to handle code motions across loop bodies. It consists in propagating variable values over a path to the subsequent paths on discovery of a mismatch in the values of some live variable, until the values match or the final path segments are accounted for without finding a match. Checking the loop invariance of values propagated beyond loops has been identified as playing an important role. Along with uniform and nonuniform code motions, the method is capable of handling control structure modifications as well. The complexity analysis shows identical worst-case performance to that of a related earlier method of path extension, which fails to handle code motion across loops. The method has been implemented and satisfactorily tested on the outputs of a basic block-based scheduler, a path-based scheduler, and the high-level synthesis tool SPARK for some benchmark examples.
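The propagation step can be illustrated with a naive textual encoding (purely illustrative: the method operates on symbolic values over FSMD paths, and the variable names and expressions below are assumptions). When a live variable mismatches on corresponding paths, the value carried from the preceding path is substituted in before a mismatch is declared.

```python
# Naive sketch of value propagation: textual substitution stands in for
# symbolic values carried from one path segment to the next.

def propagate(carried, expr):
    """Substitute carried variable values into an expression string."""
    for var, val in carried.items():
        expr = expr.replace(var, f"({val})")
    return expr

# Behaviour 0: path 1 computes t := a+1, path 2 computes x := t*b.
# Behaviour 1: after code motion, path 2 computes x := (a+1)*b directly.
b0_path1 = {"t": "a+1"}
b0_path2_x = "t*b"
b1_path2_x = "(a+1)*b"

# x mismatches textually on path 2, but propagating t's value from
# path 1 makes the two expressions coincide, so no error is reported.
print(propagate(b0_path1, b0_path2_x) == b1_path2_x)  # True
```

The paper's method additionally checks that values propagated beyond a loop are loop-invariant, which this one-shot substitution does not capture.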
Experimentation with SMT solvers and theorem provers for verification of loop and arithmetic transformations
Proceedings of the 5th IBM Collaborative Academia Research Exchange Workshop on - I-CARE '13, 2013
Loop and arithmetic transformations are applied extensively on array- and loop-intensive behaviours while designing area/energy-efficient systems in the domain of multimedia and signal processing applications. Ensuring the correctness of such transformations is crucial for the reliability of the designed systems. As an initial step, verification of these transformations using the existing SMT solvers CVC4 and Yices, and the theorem prover ACL2, is attempted.
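A concrete instance of the kind of obligation handed to such tools can be sketched as follows. The exhaustive bounded check below is only a stand-in for the solver query (the paper discharges such obligations with CVC4, Yices and ACL2 over unbounded domains): it verifies that a loop reversal combined with a strength reduction preserves the computed result.

```python
from itertools import product

# Bounded stand-in for an SMT query: check that a loop reversal plus a
# strength reduction (a*2 rewritten as a+a) preserves the result.

def original(a):
    s = 0
    for i in range(len(a)):
        s += a[i] * 2              # before transformation
    return s

def transformed(a):
    s = 0
    for i in reversed(range(len(a))):
        s += a[i] + a[i]           # after loop reversal + strength reduction
    return s

# Exhaustive check over a small input space stands in for the solver's
# validity proof over all integer inputs.
assert all(original(list(a)) == transformed(list(a))
           for a in product(range(-2, 3), repeat=3))
print("equivalent on the bounded domain")
```

An SMT solver proves the same equality once for symbolic array contents and arbitrary lengths, which is what makes it suitable for these transformations.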

Lecture Notes in Computer Science, 2012
Behavioural equivalence checking of the refinements of input behaviours taking place at various phases of synthesis of embedded systems or VLSI circuits is a well pursued field. Although extensive literature on equivalence checking of sequential behaviours exists, similar treatments for parallel behaviours are rare, mainly because of all the possible execution scenarios inherent in them. Here, we propose a translation algorithm from a parallel behaviour, represented by an untimed PRES+ model, to a sequential behaviour, represented by an FSMD model. Several equivalence checkers for FSMD models already exist for various code-based transformation techniques. We have satisfactorily performed equivalence checking of some high-level synthesis benchmarks represented by untimed PRES+ models by first translating them into FSMD models using our algorithm and subsequently feeding them to one such FSMD equivalence checker.

ISA Transactions, 2007
In this paper a method for fault detection and diagnosis (FDD) of real-time systems is developed. A modeling framework termed the real-time discrete event system (RTDES) model is presented, and a mechanism for FDD of such models is developed. The use of the RTDES framework for FDD is an extension of the works reported in the discrete event system (DES) literature, which are based on finite state machines (FSMs). FDD of RTDES models is suited to real-time systems because of their capability of representing timing faults that lead to failures, in terms of erroneous delays and deadlines, which FSM-based ones cannot address. The concept of measurement restriction of variables is introduced for RTDES, and the consequent equivalence of states and indistinguishability of transitions are characterized. Faults are modeled in terms of an unmeasurable condition variable in the state map. Diagnosability is defined and a procedure for constructing a diagnoser is provided. A checkable property of the diagnoser is shown to be a necessary and sufficient condition for diagnosability. The methodology is illustrated with an example of a hydraulic cylinder.
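The diagnoser idea can be sketched in a few lines. Everything in the fragment below is hypothetical (the states, events and fault labels are invented for illustration, and real RTDES diagnosers also track timing): it follows the set of model states consistent with the observed events and declares a fault diagnosed once every consistent state carries a fault label.

```python
# Toy diagnoser sketch: track the set of states consistent with the
# observed event sequence; diagnose a fault once every consistent
# state is a fault state.

TRANS = {  # (state, observable event) -> next state
    ("ok", "start"): "run",
    ("run", "timeout"): "fault_stuck",  # an erroneous delay causes a fault
    ("run", "done"): "ok",
}

def diagnose(observations):
    states = {"ok"}
    for event in observations:
        states = {TRANS[(s, event)] for s in states if (s, event) in TRANS}
    return bool(states) and all(s.startswith("fault") for s in states)

print(diagnose(["start", "timeout"]))  # True: the fault is isolated
print(diagnose(["start", "done"]))     # False: normal behaviour
```

Diagnosability, in this miniature view, asks whether every faulty run eventually drives the tracked state set into fault states only; the paper gives a checkable property of the diagnoser that is necessary and sufficient for this.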
Some inference rules for integer arithmetic for verification of flowchart programs on integers
IEEE Transactions on Software Engineering, 1989
A Kleene Algebra of Tagged System Actors for Reasoning about Heterogeneous Embedded Systems
IEEE Transactions on Computers, 2013
The tagged signal model (TSM) is a formal framework for modeling heterogeneous embedded systems. In the present work, we provide a representation of tagged systems using the semantics of Kleene algebra. We further illustrate mechanisms for both behavioral transformational verification through equivalence checking and property verification of heterogeneous embedded systems based on this algebraic representation.
Verification of Loop and Arithmetic Transformations of Array-Intensive Behaviors
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2013
Loop transformation techniques, along with arithmetic transformations, are applied extensively on array- and loop-intensive behaviors in the design of area/energy-efficient systems in the domain of multimedia and signal processing applications. Ensuring the correctness of such transformations is crucial for the reliability of the designed systems. In this paper, array data dependence graphs (ADDGs) are used to represent both the input …

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 1997
Verifying a sequential circuit consists in proving that the given implementation of the circuit satisfies its specification. In the present work, the input-output specification of the circuit, which is required to hold for the given implementation, is assumed to be available in the form of a Tempura program segment B. It captures the desired ongoing behavior of the circuit in terms of input-output relationships that are expected to hold at various time instants of the interval in question. The implementation is given as a formula W_S of a first-order temporal equality theory F. Goal formulas of the form P ⇒ B have been introduced to capture the correctness property of the circuit in question; P is a formula of the equality theory E contained in F and encodes the initial state(s) of the circuit. A goal reduction paradigm has been used to formulate the proof calculus capturing the state transitions produced along the intervals. Formulas, called verification conditions (VCs), whose validity ensures the correctness of the circuit, are produced corresponding to the output equality statements in B. For finite state machines, the VCs are formulas of propositional calculus and, therefore, require no temporal reasoning for their proofs. In fact, since binary decision diagram (BDD) representations are used throughout, their proofs become quite simple. The goal reduction rules proposed for iterative constructs also incorporate the synthesis of invariant assertions over the states of the circuit. The proof of a nontrivial example is presented. The paper concludes with a broad overview of the building blocks of the verifier.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2010
This paper proposes a methodology for the performance evaluation of schedules for job-shops modeled using tag machines. The most general tag structure for capturing dependences is shown to be inadequate for the task, and a new tag structure is proposed. Comparison with existing methods reveals that the proposed method has no dependence on schedule length in terms of modeling efficiency, while sharing the same order of complexity as existing approaches. The proposed method is, moreover, shown to hold promise of applicability to other models of computation and hence to heterogeneous system models having such constituent models.
arXiv preprint arXiv:1010.4953, 2010