Automatic detection of gait events has primarily been confined to methods that require a heuristic or biometric determination of threshold values for each event, which are then stipulated as conditions when defining algorithms. This detracts from full automation of the process, since subject-wise threshold calculation must be done manually or using force plate data. Machine learning and neural network approaches have been proposed, but unsupervised machine learning algorithms (K-means clustering, complete linkage) are yet to be explored and employed. Some algorithms proposed in the last two decades have presented purely automatic identification (Zeni et al., Ghoussayni et al., etc.), but the data used is either gyroscopic or captured using Vicon optical motion cameras. Microsoft Kinect is an inexpensive alternative to Vicon cameras for conducting gait analysis, and provides joint kinematics data, especially lower-limb movement data, which is pertinent to gait event detection. This project uses kinematic data collected from treadmill trials only, through three Kinect V2 sensors integrated using the FusionKit software, and introduces machine learning techniques to perform gait event detection.
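As a rough illustration of the unsupervised direction described above, the sketch below clusters per-frame features with a minimal pure-Python k-means. The feature pairs (ankle height, ankle vertical velocity), the sample values, and the two-cluster setup are all hypothetical choices for illustration, not the project's actual pipeline.

```python
def kmeans2(points, iters=50):
    """Minimal 2-cluster k-means on 2-D feature vectors (pure Python).

    Initializes centers at the lexicographic extremes, then alternates
    assignment and mean-update steps until the centers stop moving.
    """
    def dist2(p, c):
        return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2

    centers = [min(points), max(points)]
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            # assign each frame to its nearest center
            clusters[0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1].append(p)
        # recompute each center as the mean of its cluster (keep old center if empty)
        new_centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl)) if cl else c
            for cl, c in zip(clusters, centers)
        ]
        if new_centers == centers:
            break
        centers = new_centers
    labels = [0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1 for p in points]
    return centers, labels

# hypothetical per-frame features: (ankle height in m, ankle vertical velocity in m/s)
frames = [(0.02, -0.8), (0.03, -0.9), (0.01, -0.7),   # heel-strike-like frames
          (0.35, 0.5), (0.40, 0.6), (0.38, 0.4)]      # mid-swing-like frames
centers, labels = kmeans2(frames)
```

With clearly separated features like these, the two clusters align with event-like and non-event frames without any subject-specific threshold being set by hand, which is the appeal of the unsupervised approach.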
Typically, efficiency is quantified by relative 'speed', or how the number of steps needed to complete an algorithm scales with the size of the 'input' the algorithm is fed. Two ubiquitous 'exponential' problems are searching and factoring: all known algorithms for solving them on conventional computers scale roughly exponentially with input size (e.g., the length of the list to be searched or the size of the number to be factored). Discoveries of fast quantum algorithms set new bounds on computational goals and standards. One of the goals of quantum computing research is to understand which problems quantum computers can solve faster than classical (non-quantum) computers, and how big the speedup can be. Grover's algorithm and Shor's algorithm are two famous quantum algorithms that yield a polynomial speedup and an exponential speedup, respectively, over their classical counterparts. In this report, I analyze the necessities and conditions, as well as the quantum advantages, that allow quantum algorithms to outperform classical ones for certain defined problems.
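Grover's polynomial speedup can be made concrete by comparing oracle query counts rather than simulating the algorithm itself: classical unstructured search needs O(N) queries in the worst case, while Grover's algorithm needs about (π/4)·√N. The sketch below only computes these counts; it is not an implementation of the quantum algorithm.

```python
import math

def classical_queries(n):
    """Worst-case oracle queries for classical unstructured search over n items."""
    return n

def grover_iterations(n):
    """Approximate optimal Grover iteration count, ceil((pi/4) * sqrt(n))."""
    return math.ceil(math.pi / 4 * math.sqrt(n))

# the gap widens as the search space grows
for n in (16, 1024, 2 ** 20):
    print(f"N={n}: classical={classical_queries(n)}, grover={grover_iterations(n)}")
```

For a million-item search space (N = 2^20), the classical worst case is over a million queries while Grover's algorithm needs on the order of eight hundred, which is the quadratic speedup analyzed in the report.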
Image captioning is an important area of research that lies at the intersection of computer vision and natural language processing. It is important because it provides insight that helps humans better comprehend the perception machines develop when they decipher an image to generate descriptions of it. It encompasses a machine's understanding of both:
• which features (or pixels) represent which objects in the image, and
• what the network should derive about the context in which the objects in the image are represented.
The latter is termed the language grounding problem: an object, while differing in function in different circumstances, should be understood at a common level without revolving around its specifics. If this problem is successfully solved, it will explain much of the decision-making process a machine undergoes to develop an understanding of the scene represented in an image. This holds because an object, while being classified, must also be described in relation to the other objects present with it. Objects must also be described in a contextual sense to achieve human-like caption generation, in order to get closer to genuinely artificially intelligent machines. Image captioning methods are further adapted into video captioning, a natural extension of the former. In this report, we explore and summarize five of the most relevant and popular algorithms developed by researchers over the years towards generating finer and more accurate image captions.
pymc-devs/pymc-examples: Second November 2021 snapshot
This is a snapshot of the repository in November 2021, with many notebooks updated to pymc3 3.11.x and its best practices, but no notebooks yet using pymc 4.x (whose beta release is near). It is therefore not so much a release as a snapshot in time: notebooks are updated at their own pace, independent of pymc releases, so each snapshot will contain notebooks executed with multiple pymc versions.
This is a snapshot of the repository in January 2022. All but a couple of notebooks use PyMC v3. Our plan is to have most notebooks using PyMC v4 in a few months, by the time we release another snapshot. If you want stable links to the example notebooks using v3, link to this snapshot. Tied to https://github.com/pymc-devs/pymc-sandbox/releases/tag/2022.01.0, which defines the binder environment used when clicking on the binder badge. This environment is completely frozen, much like the snapshot. For more info on pymc-examples and our strategy for releasing snapshots, see https://github.com/pymc-devs/pymc-examples/wiki/%22Versioning%22.
Gait Abnormality Detection Using Deep Convolution Network
Advances in Data Mining and Database Management
Human gait analysis plays a significant role in the clinical domain for the diagnosis of musculoskeletal disorders. Detecting abnormalities (unsteady gait, stiff gait, etc.) in human walking is extremely challenging when no prior information about the gait pattern is available. A low-cost Kinect sensor is used to obtain promising results on human skeletal tracking in a convenient manner. A model is created on human skeletal joint positions extracted using the Kinect v2 sensor, in place of Kinect-based color and depth images. Normal and abnormal gait data are collected from different persons on a treadmill. Each gait trial is decomposed into cycles. A convolutional neural network (CNN) model was developed on this experimental data for the detection of abnormality in walking patterns, and compared with state-of-the-art techniques.
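The core operation such a CNN applies to joint-position time series, a 1-D convolution followed by a nonlinearity and pooling, can be sketched in a few lines. The knee-angle values and the edge-detector-style kernel below are hypothetical; a real model would learn its kernels from data and stack many filters and layers.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation) over a gait-cycle signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Elementwise rectified linear unit."""
    return [max(0.0, x) for x in xs]

def maxpool(xs, size=2):
    """Non-overlapping max pooling with the given window size."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

# toy knee-angle sequence for one gait cycle (hypothetical values, degrees)
cycle = [5.0, 12.0, 30.0, 55.0, 60.0, 40.0, 15.0, 5.0]
# a difference kernel responds strongly where flexion changes rapidly
feat = maxpool(relu(conv1d(cycle, [-1.0, 0.0, 1.0])))
```

The resulting feature vector peaks where the joint angle changes fastest within the cycle; stacking many such learned filters is what lets the CNN separate normal from abnormal walking patterns.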
Drafts by Abhipsha Das
Papers by Abhipsha Das