DDoSBlocker: Enhancing SDN security with time-based address mapping and AI-driven approach
Dr Kshira Sagar Sahoo, Mitali Sinha, Padmalochan Bera, Manoranjan Satpathy, Joel J P C Rodrigues
Source Title: Computer Networks
Software Defined Networking (SDN) is vulnerable to Distributed Denial of Service (DDoS) attacks due to its centralized architecture. These attacks involve injecting large numbers of fake packets with spoofed header field information into the controller, leading to network malfunctions. Existing solutions often block both malicious and benign traffic indiscriminately, resulting in a high False Positive Rate. In this paper, we present DDoSBlocker, a lightweight and protocol-independent DDoS defense system designed to identify and block the source points of DDoS attacks without disrupting legitimate traffic. DDoSBlocker combines a time-based address mapping method with a triggering-based machine learning method to accurately identify attack sources. It introduces four novel features: percentage of fake destination IPs, average bytes per packet, percentage of bidirectional flow rules, and percentage of fake flow rules. The system then installs blocking rules at the attack sources, providing immediate mitigation. The model outperforms existing mitigation solutions such as destination point blocking, clustering, and backtracking. Implemented in the Floodlight controller, DDoSBlocker was evaluated under four attack scenarios using different performance metrics. The proposed model, utilizing a random forest classifier, demonstrated 99.71% accuracy, an average detection time of 3 s, an average mitigation time of 0.5 s, and a False Positive Rate of 0.51%.
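The four flow-statistics features lend themselves to a compact illustration. Below is a minimal, hypothetical Python sketch (the `flows` record layout and field names are assumptions for illustration, not the paper's implementation) of computing the features over a window of flow records and feeding them to a random forest trigger:

```python
# Hypothetical sketch: DDoSBlocker-style window features plus a random-forest
# trigger. Field names and the window abstraction are assumptions.
from sklearn.ensemble import RandomForestClassifier

def window_features(flows, known_dst_ips):
    """flows: non-empty list of dicts with keys dst, bytes, packets, bidirectional, fake."""
    n = len(flows)
    pct_fake_dst = sum(f["dst"] not in known_dst_ips for f in flows) / n
    avg_bytes_per_pkt = sum(f["bytes"] for f in flows) / max(1, sum(f["packets"] for f in flows))
    pct_bidir = sum(f["bidirectional"] for f in flows) / n
    pct_fake_rules = sum(f["fake"] for f in flows) / n
    return [pct_fake_dst, avg_bytes_per_pkt, pct_bidir, pct_fake_rules]

clf = RandomForestClassifier(n_estimators=100)
# X = [window_features(w, known_ips) for w in windows]; y = labels
# clf.fit(X, y)  # on a positive prediction, install blocking rules at the source
```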
Enhanced Connectivity for Non-Critical In-Vehicle Systems Using EnRF24L01
Dr Kshira Sagar Sahoo, J N V R Swarup Kumar, Kuna Venkateswararao, Umashankar Ghugar, Sourav Kumar Bhoi
Source Title: IEEE Sensors Journal
There has been a significant paradigm shift from wired to wireless technology in in-vehicle networks. This shift is driven by the need for greater scalability, cost-effectiveness, and flexibility. In the automotive industry, traditional wired protocols such as the Local Interconnect Network (LIN) and Media Oriented Systems Transport (MOST) for non-critical systems add complexity to installation and maintenance, incur higher material costs, and offer limited scalability and mobility. Non-critical systems, such as infotainment and weather forecast systems, do not require low latency and do not impair vehicle function when unavailable. This paper presents an advanced methodology for enhancing connectivity in non-critical in-vehicle networks using Nordic Semiconductor's enhanced nRF24L01 (EnRF24L01) module. The EnRF24L01 is an nRF24L01 module that incorporates the Sensor-Medium Access Control (S-MAC) algorithm for energy-efficient communication. The proposed method enables seamless communication between non-critical systems using a tree-based master-slave architecture, where the master is the actuator and the slaves are the sensor nodes. The S-MAC protocol was incorporated to optimize energy efficiency through synchronized sleep/wake schedules, reduce power consumption, and enhance scalability. Comprehensive experiments were conducted in simulated environments using the OPNET and Proteus circuit simulators, analyzing critical performance metrics: latency, jitter, throughput, packet delivery ratio, and energy efficiency. The results indicate that the proposed method supports a greater number of nodes with enhanced data transmission rates and operates at lower voltages, thereby extending the communication range and reducing overall power consumption. Additionally, hardware simulation results demonstrate the successful integration of EnRF24L01 modules with Arduino for wireless data transmission, showing significant improvements in scalability, energy efficiency, and adaptability, as well as reduced architectural and operational costs and improved maintenance efficiency.
M-SOS: Mobility-Aware Secured Offloading and Scheduling in Dew-Enabled Vehicular Fog of Things
Dr Kshira Sagar Sahoo, Goluguri N V Rajareddy, Kaushik Mishra, Santosh Kumar Majhi, Muhammad Bilal
Source Title: IEEE Transactions on Intelligent Transportation Systems
The gradual advancement of Internet-connected vehicles has transformed roads and highways into an intelligent ecosystem. This advancement has led to widespread adoption of vehicular networks, driven by the enhanced capabilities of automobiles. However, managing mobility-aware computations, ensuring network security amidst instability, and overcoming resource constraints pose significant challenges in heterogeneous vehicular network applications within Fog computing. Moreover, latency overhead remains a critical issue for tasks sensitive to latency and deadlines. The objective of this research is to develop a Mobility-aware Secured offloading and Scheduling (M-SOS) technique for a Dew-enabled vehicular Fog-Cloud computing system. This technique aims to address the issues outlined above by moving the computations closer to the edge of the network. Initially, a Dew-facilitated vehicular Fog network is proposed, leveraging heterogeneous computing nodes to handle diverse vehicular requests efficiently and ensure uninterrupted services within the vehicular network. Further, task management is optimized using fuzzy logic, which categorizes tasks based on their specific requirements and identifies the target layers for offloading. Besides, a cryptographic algorithm known as SHA-256 RSA enhances security. Moreover, a novel Linear Weight-based JAYA scheduling algorithm is introduced to assign tasks to appropriate computing nodes. The proposed algorithm surpasses comparable algorithms by 23% in terms of average waiting time (AWT), 18% in latency rate, 14% and 23% in meeting the hard deadline (H_d) and soft deadline (S_d), respectively, and 35% in average system cost.
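The JAYA update rule behind such schedulers is compact enough to sketch. The snippet below shows a generic JAYA step with a linearly decaying weight, offered only as an illustrative approximation of a "linear weight-based" variant; the weight schedule and cost function are assumptions, not the paper's exact formulation:

```python
# Generic JAYA step with an assumed linearly decreasing weight (illustrative only).
import numpy as np

def jaya_step(pop, fitness, it, max_it, rng):
    """pop: (n, d) candidate task-to-node assignments encoded as real vectors."""
    best = pop[np.argmin(fitness)]
    worst = pop[np.argmax(fitness)]
    w = 1.0 - it / max_it                          # assumed linear weight schedule
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    # Move toward the best solution and away from the worst one.
    return w * pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))

rng = np.random.default_rng(0)
pop = rng.random((20, 8))
fit = pop.sum(axis=1)                              # stand-in scheduling cost
pop = jaya_step(pop, fit, it=1, max_it=100, rng=rng)
```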
A digital twin-enabled fog-edge-assisted IoAT framework for Oryza Sativa disease identification and classification
Dr Kshira Sagar Sahoo, Goluguri N V Rajareddy, Kaushik Mishra, Satish Kumar Satti, Gurpreet Singh Chhabra, Amir H Gandomi
Source Title: Ecological Informatics
The integration of agri-technology with the Internet of Agricultural Things (IoAT) is revolutionizing the field of smart agriculture, particularly in diagnosing and treating Oryza sativa (rice) diseases. Given that rice serves as a staple food for over half of the global population, ensuring its healthy cultivation is crucial, particularly with the growing global population. Accurate and timely identification of rice diseases, such as Brown Leaf Spot (BS), Bacterial Leaf Blight (BLB), and Leaf Blast (LB), is therefore essential to maintaining and enhancing rice production. In response to this critical need, the research introduces a timely detection system that leverages the power of Digital Twin (DT)-enabled Fog computing, integrated with Edge and Cloud Computing (CC), and supported by sensors and advanced technologies. At the heart of this system lies a sophisticated deep-learning model built on the robust AlexNet neural network architecture. This model is further refined by including Quaternion convolution layers, which enhance colour information processing, and Atrous convolution layers, which improve depth perception, particularly in extracting disease patterns. To boost the model's predictive accuracy, the Chaotic Honey Badger Algorithm (CHBA) is employed to optimize the CNN hyperparameters, resulting in an impressive average accuracy of 93.5%. This performance significantly surpasses that of other models, including AlexNet, AlexNet-Atrous, QAlexNet, and QAlexNet-Atrous, which achieved respective accuracies of 75%, 84%, 89%, and 91%. Moreover, the CHBA optimization algorithm outperforms other techniques like CSO, BSO, PSO, and CJAYA and demonstrates optimal results with an 80:20 training-testing split. Service latency analysis further reveals that the Fog-Edge-assisted environment is more efficient than the Cloud-assisted model for latency reduction. Additionally, the DT-enabled QAlexNet-Atrous-CHBA model proves to be far superior to its non-DT counterpart, showing substantial improvements of 18.7% in accuracy, 17% in recall, 19% in F1-measure, 17.3% in specificity, and 13.4% in precision, respectively. These enhancements are supported by convergence analysis and the Quade rank test, establishing the model's effectiveness and potential to significantly improve rice disease diagnosis and management. This advancement promises to contribute significantly to the sustainability and productivity of global rice cultivation.
Unveiling Sybil Attacks Using AI-Driven Techniques in Software-Defined Vehicular Networks
Dr Kshira Sagar Sahoo, Rajendra Prasad Nayak, Sourav Kumar Bhoi, Srinivas Sethi, Subasish Mohapatra, Monowar Bhuyan
Source Title: Security and Privacy
The centralized nature of software-defined networks (SDN) makes them a suitable choice for vehicular networks. This enables numerous vehicles to communicate within an SD-vehicular network (SDVN) through vehicle-to-vehicle (V2V) and with road-side units (RSUs) via vehicle-to-infrastructure (V2I) connections. The increased traffic volume necessitates robust security solutions, particularly against Sybil attacks, in which the attacker aims to undermine network trust by gaining unauthorized access or manipulating network communication. While traditional cryptography-based security methods are effective, their encryption and decryption processes may cause excess delays in vehicular scenarios. Previous studies have suggested AI-driven approaches such as machine learning (ML) for Sybil attack detection in vehicular networks. However, their primary drawbacks are high detection time and the feature engineering of network data. To overcome these issues, we propose a two-phase detection framework, in which the first phase utilizes cosine similarity and weighting factors to identify attack misbehavior in vehicles. These metrics contribute to the calculation of effective node trust (ENT), which helps in further attack detection. In the second phase, deep learning (DL) models such as CNN and LSTM are employed for further granular classification of misbehaving vehicles into normal, faulty, or Sybil attack vehicles. CNN and LSTM are used due to the time-series nature of vehicle data. The methodology, deployed at the controller, provides a comprehensive analysis, offering a single- to multi-stage classification scheme. The classifier identifies six distinct vehicle types associated with such attacks. The proposed schemes demonstrate superior accuracy, averaging from 94.49% to 99.94%, surpassing the performance of existing methods.
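A cosine-similarity trust metric of this kind can be illustrated in a few lines. The sketch below scores a vehicle's reported feature vector against its neighbours' observations; the weighting, aggregation, and threshold are assumptions for illustration, not the paper's exact ENT formula:

```python
# Illustrative effective-node-trust (ENT) style score from cosine similarity.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def ent_score(reported, observations, weights):
    """reported: vehicle's claimed feature vector; observations: neighbours' views."""
    sims = np.array([cosine(reported, o) for o in observations])
    return float(np.average(sims, weights=weights))  # weighted trust in [-1, 1]

reported = np.array([0.9, 0.1, 0.4])
neighbours = [np.array([0.8, 0.2, 0.5]), np.array([0.1, 0.9, 0.1])]
trust = ent_score(reported, neighbours, weights=[0.7, 0.3])
print("flag for DL classification" if trust < 0.6 else "normal")  # assumed threshold
```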
Enhanced Biometric Template Protection Schemes for Securing Face Recognition in IoT Environment
Dr Kshira Sagar Sahoo, A H Gandomi, Alamgir Sardar, Saiyed Umer, Ranjeet Kumar Rout
Source Title: IEEE Internet of Things Journal
With the increasing use of biometrics in Internet of Things (IoT)-based applications, it is essential to ensure that biometric-based authentication systems are secure. Biometric characteristics can be accessed by anyone, which poses a risk of unauthorized access to the system through spoofed biometric traits. Therefore, it is important to implement security schemes that are secure, suitable for real-life applications, less computationally intensive, and resistant to attack. This work presents a hybrid template protection scheme for secure face recognition in IoT-based environments, which integrates Cancelable Biometrics and Bio-Cryptography. The proposed system involves two main steps: 1) face recognition and 2) face biometric template protection. The face recognition step includes face image preprocessing by the tree structure part model (TSPM), feature extraction by the ensemble patch statistics (EPS) technique, and user classification by a multiclass linear support vector machine (SVM). The template protection scheme includes cancelable biometric generation by modified FaceHashing and a Sliding-XOR (called S-XOR)-based novel Bio-Cryptographic technique. A user biometric-based key generation technique has been introduced for the employed Bio-Cryptography. Three benchmark facial databases, CVL, FEI, and FERET, have been used for the performance evaluation and security analysis. The proposed system achieves better accuracy on all the databases using 200-D cancelable feature vectors computed from the 500-D original feature vectors. The modified FaceHashing and S-XOR method shows superiority over existing face recognition and template protection systems.
Enhancing DDoS detection in SDIoT through effective feature selection with SMOTE-ENN
Source Title: PLoS ONE
The Internet of Things (IoT) facilitates a variety of heterogeneous devices to be enabled with network connectivity via various network architectures to gather and exchange real-time information. On the other hand, the rise of IoT creates security threats such as Distributed Denial of Service (DDoS) attacks. The recent advancement of the Software Defined-Internet of Things (SDIoT) architecture can provide better security solutions compared to conventional networking approaches. Moreover, limited computing resources and heterogeneous network protocols are major challenges in the SDIoT ecosystem. Given these circumstances, it is essential to design a low-cost DDoS attack classifier. The current study aims to employ an improved feature selection (FS) technique that determines the most relevant features, which can improve the detection rate and reduce the training time. First, to overcome the data imbalance problem, Edited Nearest Neighbor-based Synthetic Minority Oversampling (SMOTE-ENN) was exploited. The study proposes SFMI, an FS method that combines Sequential Feature Selection (SFE) and Mutual Information (MI) techniques. The top k common features were extracted from the features nominated by SFE and MI. Further, principal component analysis (PCA) is employed to address multicollinearity issues in the dataset. Comprehensive experiments have been conducted on two benchmark datasets, KDDCup99 and CIC IoT-2023. For classification purposes, Decision Tree, K-Nearest Neighbor, Gaussian Naive Bayes, Random Forest (RF), and Multilayer Perceptron classifiers were employed. The experimental results quantitatively demonstrate that the proposed SMOTE-ENN+SFMI+PCA with RF classifier achieves 99.97% accuracy and 99.39% precision with 10 features.
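The preprocessing chain maps naturally onto standard Python tooling. A minimal sketch using imbalanced-learn and scikit-learn follows; only the MI half of SFMI is shown (the sequential-selection pass is omitted for brevity), and the feature count k and classifier settings are assumptions:

```python
# Sketch: SMOTE-ENN resampling, MI-based feature ranking, PCA, then RF.
# X, y are assumed to be numpy arrays of features and labels.
import numpy as np
from imblearn.combine import SMOTEENN
from sklearn.feature_selection import mutual_info_classif
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def fit_pipeline(X, y, k=10):
    X_res, y_res = SMOTEENN().fit_resample(X, y)          # balance classes
    mi = mutual_info_classif(X_res, y_res)                # rank features by MI
    top_k = np.argsort(mi)[-k:]                           # keep top-k features
    X_sel = PCA(n_components=min(k, 5)).fit_transform(X_res[:, top_k])
    return RandomForestClassifier(n_estimators=100).fit(X_sel, y_res)
```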
A combination learning framework to uncover cyber attacks in IoT networks
Source Title: Internet of Things (Netherlands)
The Internet of Things (IoT) is rapidly expanding, connecting an increasing number of devices daily. Diverse and extensive networking among resource-constrained devices creates vulnerabilities to various cyber-attacks. IoT under the supervision of a Software Defined Network (SDN) enhances network performance through its flexibility and adaptability. Different methods have been employed for detecting security attacks; however, they are often computationally expensive and unsuitable for such resource-constrained environments. Consequently, there is a significant need to develop efficient security measures against a range of attacks. Recent advancements in deep learning (DL) models have paved the way for designing effective attack detection methods. In this study, we leverage a Genetic Algorithm (GA) with a correlation coefficient as the fitness function for feature selection. Additionally, mutual information (MI) is applied for feature ranking to measure each feature's dependency on the target variable. The selected optimal features were used to train a hybrid DNN model to uncover attacks in IoT networks. The hybrid DNN integrates a Convolutional Neural Network, Bi-Gated Recurrent Units (Bi-GRU), and Bidirectional Long Short-Term Memory (Bi-LSTM) for training on the input data. The performance of our proposed model is evaluated against several baseline DL models, and an ablation study is provided. Three key datasets, InSDN, UNSW-NB15, and CICIoT 2023, containing various types of attacks, were used to assess the performance of the model. The proposed model demonstrates impressive accuracy and detection time compared with existing models, with lower resource consumption.
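As an illustration of the feature-selection step, here is a hedged numpy sketch of a GA whose fitness rewards features that correlate with the target; the population size, rates, crossover scheme, and the exact fitness are assumptions, not the paper's configuration:

```python
# Sketch: GA feature selection with a correlation-based fitness function.
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    if mask.sum() == 0:
        return -1.0
    # Mean absolute Pearson correlation of the selected features with the target.
    return float(np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in np.where(mask)[0]]))

def ga_select(X, y, pop=20, gens=30, p_mut=0.05):
    masks = rng.random((pop, X.shape[1])) < 0.5            # random feature subsets
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in masks])
        parents = masks[np.argsort(scores)[-pop // 2:]]    # truncation selection
        cut = X.shape[1] // 2
        kids = np.concatenate([parents[:, :cut], parents[::-1, cut:]], axis=1)
        kids ^= rng.random(kids.shape) < p_mut             # bit-flip mutation
        masks = np.concatenate([parents, kids])
    return masks[np.argmax([fitness(m, X, y) for m in masks])]
```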
Towards Designing an Energy Efficient Accelerated Sparse Convolutional Neural Network
Dr Kshira Sagar Sahoo, Vijaypal Singh Rathor, Munesh Singh, Rahul Gupta, G K Sharma, Monowar Bhuyan
Source Title: 2024 IEEE 36th International Conference on Tools with Artificial Intelligence (ICTAI)
Among other deep learning (DL) architectures, the convolutional neural network (CNN) has wide applications in speech recognition, face detection, natural language processing, and computer vision. The Multiply and Accumulate (MAC) unit is a core part of a CNN and requires large computation and memory resources, which result in high power dissipation on low-power embedded devices. Hence, hardware implementation of CNNs with high throughput remains a challenge. Therefore, sparsity is introduced in the weights by a non-linear method with a minor compromise in accuracy. Experimental results show 52% sparsity with a 4% loss in accuracy. In addition, an indexing module is proposed to perform Single Instruction Multiple Data (SIMD) operations in the fully connected layer, executing only effective operations without multiplication. This module is used along with sparsity to offer better results compared to SOTA methods. Cadence RTL compiler results show that the proposed indexing module saves 1.3 nJ of energy compared to existing methods.
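The idea of indexing so that only effective (non-zero) operations run can be sketched in a few lines of Python; this is a software analogue of the hardware module, with the pruning threshold and names assumed for illustration:

```python
# Sketch: magnitude pruning plus an index-based MAC that skips zero weights.
import numpy as np

def prune(w, sparsity=0.52):
    """Zero out the smallest-magnitude weights (illustrative magnitude pruning)."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def sparse_mac(x, w):
    """Accumulate only over non-zero weight indices, as an indexing module would."""
    idx = np.flatnonzero(w)               # index list, precomputed in hardware
    return float(np.dot(x[idx], w[idx]))  # effective operations only

rng = np.random.default_rng(1)
w = prune(rng.normal(size=256))
x = rng.normal(size=256)
assert np.isclose(sparse_mac(x, w), float(np.dot(x, w)))  # same result, fewer ops
```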
Secured and Privacy-Preserving Multi-Authority Access Control System for Cloud-Based Healthcare Data Sharing
Dr Kshira Sagar Sahoo, Nabil Sharaf Almalki, N Z Jhanjhi, Reetu Gupta, Mohammed A Alzain
Source Title: Sensors
With continuous advancements in Internet technology and the increased use of cryptographic techniques, the cloud has become the obvious choice for data sharing. Generally, the data are outsourced to cloud storage servers in encrypted form. Access control methods can be used on encrypted outsourced data to facilitate and regulate access. Multi-authority attribute-based encryption is a propitious technique for controlling who can access encrypted data in inter-domain applications such as sharing data between organizations, sharing data in healthcare, etc. The data owner may require the flexibility to share the data with known and unknown users. The known or closed-domain users may be internal employees of the organization, and unknown or open-domain users may be outside agencies, third-party users, etc. In the case of closed-domain users, the data owner becomes the key issuing authority, and in the case of open-domain users, various established attribute authorities perform the task of key issuance. Privacy preservation is also a crucial requirement in cloud-based data-sharing systems. This work proposes the SP-MAACS scheme, a secure and privacy-preserving multi-authority access control system for cloud-based healthcare data sharing. Both open and closed domain users are considered, and policy privacy is ensured by disclosing only the names of policy attributes. The values of the attributes are kept hidden. Characteristic comparison with similar existing schemes shows that our scheme simultaneously provides features such as a multi-authority setting, an expressive and flexible access policy structure, privacy preservation, and scalability. Our performance analysis shows that the decryption cost is reasonable. Furthermore, the scheme is demonstrated to be adaptively secure under the standard model.
A learning automata based edge resource allocation approach for IoT-enabled smart cities
Dr Kshira Sagar Sahoo, Sampa Sahoo, Bibhudatta Sahoo, Amir H Gandomi
Source Title: Digital Communications and Networks
The development of Internet of Things (IoT) technology is leading to a new era of smart applications such as smart transportation, buildings, and smart homes. Moreover, these applications act as the building blocks of IoT-enabled smart cities. The high volume and high velocity of data generated by various smart city applications are sent to flexible and efficient cloud computing resources for processing. However, there is high computation latency due to the presence of a remote cloud server. Edge computing, which brings the computation close to the data source, is introduced to overcome this problem. In an IoT-enabled smart city environment, one of the main concerns is to consume the least amount of energy while executing tasks that satisfy the delay constraint. Efficient resource allocation at the edge helps to address this issue. In this paper, an energy and delay minimization problem in a smart city environment is formulated as a bi-objective edge resource allocation problem. First, we present a three-layer network architecture for IoT-enabled smart cities. Then, we design a learning automata-based edge resource allocation approach on this three-layer architecture to solve the bi-objective minimization problem. Learning Automata (LA) is a reinforcement-based adaptive decision-maker that helps to find the best task-to-edge-resource mapping. An extensive set of simulations is performed to demonstrate the applicability and effectiveness of the LA-based approach in the IoT-enabled smart city environment.
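The LA decision loop rests on a simple probability update. Below is a hedged sketch of a linear reward-penalty automaton choosing among edge servers; the learning rates and the reward test are illustrative assumptions, not the paper's scheme:

```python
# Sketch: linear reward-penalty learning automaton picking an edge server.
import numpy as np

def update(p, i, rewarded, a=0.1, b=0.05):
    """Reinforce action i on reward; redistribute probability on penalty."""
    r = len(p)
    if rewarded:
        q = (1 - a) * p
        q[i] = p[i] + a * (1 - p[i])
    else:
        q = (1 - b) * p + b / (r - 1)
        q[i] = (1 - b) * p[i]
    return q                                # still sums to 1 in both branches

rng = np.random.default_rng(2)
p = np.full(4, 0.25)                        # 4 candidate edge servers
for _ in range(50):
    i = rng.choice(4, p=p)
    ok = i == 0                             # stand-in: server 0 meets the delay bound
    p = update(p, i, ok)
print(p)                                    # probability mass shifts toward server 0
```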
5G-Enabled Secure IoT Applications in Smart Cities Using Software-Defined Networks
Source Title: Handbook of Research on Network-Enabled IoT Applications for Smart City Services
With the shift towards a smart future, a great deal of research is being done in the areas of the Internet of Things (IoT) and wireless communication, especially 5G network technology. These technologies are steering society towards a world of high connectivity through secure, evolving telecommunication methodologies. In this chapter we examine the role of 5G networks in enhancing IoT devices and discuss their security aspects. The integration of IoT and software-defined networking, termed SDIoT, enables automatic traffic rerouting, device reconfiguration, and bandwidth allocation seamlessly. Smart cities utilize SDIoT integrated with 5G to gather real-time data, better understand how demand patterns are changing, and respond with quicker and more affordable solutions. The authors survey the existing research on 5G networks and IoT, and the areas being considered for future improvement.
Cost Minimization of Airline Crew Scheduling Problem Using Assignment Technique
Dr Kshira Sagar Sahoo, Chittaranjan Mallick, Sourav Kumar Bhoi, Trailokyanath Singh, Khalid Hussain, Basheer Rikshan
Source Title: International Journal of Intelligent Systems and Applications in Engineering
Transportation Problem Solver for Drug Delivery in Pharmaceutical Companies using Steppingstone Method
Dr Kshira Sagar Sahoo, Trailokyanath Singh, Prachi Swain, Basheer Ruskhan, Khalid Hussain, Chittaranjan Mallick, Sourav Kumar Bhoi
Source Title: International Journal of Intelligent Systems and Applications in Engineering
A Distributed Fuzzy Optimal Decision Making Strategy for Task Offloading in Edge Computing Environment
Dr Kshira Sagar Sahoo, Daehan Kwak, Sasmita Rani Behera, Niranjan Panigrahi, Muhammad Bilal
Source Title: IEEE Access
With the technological evolution of mobile devices, 5G and 6G communication, and users' demand for new-generation applications viz. face recognition, image processing, augmented reality, etc., the new computing paradigm of Mobile Edge Computing (MEC) has accelerated. It operates in close proximity to users by facilitating the execution of computation-intensive tasks offloaded from devices. However, the offloading decision at the device level faces many challenges due to uncertainty in various profiling parameters in modern communication technologies. Further, as the number of profiling parameters increases, fuzzy-based approaches suffer inference-searching overheads. In this context, a fuzzy-based approach with an optimal inference strategy is proposed to make a suitable offloading decision. The proposed approach utilizes the Classification and Regression Tree (CART) mechanism at the inference engine, with a reduced time complexity of O(|V|^2 log2 |L|) compared to O(|L|^|V|) for state-of-the-art conventional fuzzy-based offloading approaches, and has proved to be more efficient. The performance of the proposed approach is evaluated and compared with contemporary offloading algorithms in YAFS, a Python-based fog and edge simulator. The simulation results show a reduction in average task processing time, average task completion time, and energy consumption, improved server utilization, and tolerance to latency and delay sensitivity for the offloaded tasks in terms of reduced task failure rates.
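A decision-tree inference engine for the offload/local decision is easy to prototype; the sketch below trains scikit-learn's CART implementation on synthetic profiling parameters (the parameter set and the labeling policy are assumptions for illustration, not the paper's rule base):

```python
# Sketch: CART-based inference for the device-level offloading decision.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
# Profiling parameters: [task size MB, CPU load, bandwidth Mbps, battery %]
X = rng.random((500, 4)) * [50, 1.0, 100, 100]
# Assumed policy used only to label training data: offload big tasks on good links.
y = ((X[:, 0] > 20) & (X[:, 2] > 40)).astype(int)   # 1 = offload, 0 = local

cart = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(cart.predict([[30, 0.8, 75, 55]]))            # -> [1], offload this task
```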
Time Series-Based Edge Resource Prediction and Parallel Optimal Task Allocation in Mobile Edge Computing Environment
Source Title: Processes
The offloading of computationally intensive tasks to edge servers is indispensable in the mobile edge computing (MEC) environment. Once the tasks are offloaded, the subsequent challenges lie in buffering them and assigning them to edge virtual machine (VM) resources to meet multicriteria requirements. Furthermore, edge resource availability is dynamic in nature and needs joint prediction and optimal allocation for the efficient usage of resources and fulfillment of the tasks' requirements. To this end, this work makes three contributions. First, a delay sensitivity-based priority scheduling (DSPS) policy is presented to schedule tasks as per their deadlines. Secondly, based on exploratory data analysis and inferred seasonal patterns in the usage of edge CPU resources from the GWA-T-12 Bitbrains VM utilization dataset, the availability of VM resources is predicted by using a Holt-Winters-based univariate algorithm (HWVMR) and a vector autoregression-based multivariate algorithm (VARVMR). Finally, for optimal and fast task assignment, a parallel differential evolution-based task allocation (pDETA) strategy is proposed. The proposed algorithms are evaluated extensively with standard performance metrics, and the results show nearly 22%, 35%, and 69% improvements in cost and 41%, 52%, and 78% improvements in energy when compared with MTSS, DE, and min-min strategies, respectively.
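For the univariate prediction step, statsmodels provides a ready Holt-Winters implementation. A minimal sketch of forecasting CPU headroom from a seasonal utilization series follows; the synthetic series, seasonal period, and horizon are assumptions, not the GWA-T-12 pipeline:

```python
# Sketch: Holt-Winters (triple exponential smoothing) forecast of edge CPU usage.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(4)
t = np.arange(96)                                    # e.g. 4 days of hourly samples
cpu = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

model = ExponentialSmoothing(cpu, trend="add", seasonal="add",
                             seasonal_periods=24).fit()
print((100 - model.forecast(12)).round(1))           # predicted free CPU, next 12 h
```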
Adaptive Congestion Control Mechanism to Enhance TCP Performance in Cooperative IoV
Source Title: IEEE Access
One of the main causes of energy consumption in Internet of Vehicles (IoV) networks is an ill-designed network congestion control protocol, which results in numerous packet drops, lower throughput, and increased packet retransmissions. In an IoV network, the objective of increased network throughput can be achieved by minimizing packet retransmissions and optimizing bandwidth utilization. It has been observed that the congestion control mechanism (i.e., the congestion window) can play a vital role in mitigating the aforementioned challenges. Thus, this paper presents a cross-layer technique for controlling congestion in an IoV network based on throughput and buffer use. In the proposed approach, the receiver appends two bits to the acknowledgment (ACK) packet that describe the status of the buffer space and link utilization. The sender then uses this information to monitor congestion and limit its packet transmissions. The proposed model has been evaluated extensively, and the results demonstrate significantly higher network performance in terms of buffer utilization, link utilization, throughput, and packet loss.
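The two feedback bits can drive a very small sender-side rule. Here is a hedged sketch of how a sender might map the ACK bits (buffer pressure, link utilization) to congestion-window changes; the thresholds and factors are illustrative assumptions, not the paper's exact policy:

```python
# Sketch: sender-side congestion window update driven by 2-bit ACK feedback.
def update_cwnd(cwnd, buf_bit, link_bit, mss=1):
    """buf_bit=1: receiver buffer under pressure; link_bit=1: link heavily used."""
    if buf_bit and link_bit:
        return max(mss, cwnd // 2)   # both signals raised: multiplicative decrease
    if buf_bit or link_bit:
        return cwnd                  # one signal raised: hold the window
    return cwnd + mss                # all clear: additive increase

cwnd = 10
for buf_bit, link_bit in [(0, 0), (0, 0), (1, 0), (1, 1), (0, 0)]:
    cwnd = update_cwnd(cwnd, buf_bit, link_bit)
    print(cwnd)                      # 11, 12, 12, 6, 7
```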
Sentiment Analysis with Tweets Behaviour in Twitter Streaming API
Dr Kshira Sagar Sahoo, Nz Jhanjhi, Kuldeep Chouhan, Mukesh Yadav, Ranjeet Kumar Rout, Mehedi Masud, Sultan Aljahdali
Source Title: Computer Systems Science and Engineering
Twitter is a vibrant platform offering a quick and effective way to analyze users' perceptions of activities on social media. Many researchers and industry experts turn to Twitter sentiment analysis to recognize stakeholder groups. Sentiment analysis requires advanced approaches encompassing data sentiment analysis and various machine learning tools. This work assesses sentiment in multiple fields that affect people in real time, using Naive Bayes and Support Vector Machine (SVM) classifiers. The paper focuses on analysing the distinguished sentiment techniques on tweet-behaviour datasets for various spheres such as healthcare and behaviour estimation. In addition, the results explore and validate statistical machine learning classifiers and report the accuracy percentages attained for positive, negative, and neutral tweets. In this work, we obtained a Twitter Application Programming Interface (API) account and programmed the sentiment analysis approach in Python for the computational measurement of users' perceptions; it extracts a massive number of tweets and provides market value to the Twitter account proprietor. To distinguish the results in terms of performance evaluation, an error analysis investigates the features of various stakeholders, comprising social media analytics researchers, Natural Language Processing (NLP) developers, engineering managers, and experts involved in decision-making.
A Pattern Classification Model for Vowel Data Using Fuzzy Nearest Neighbor
Dr Kshira Sagar Sahoo, Monika Khandelwal, Ranjeet Kumar Rout, Mohammad Shorfuzzaman, Mehedi Masud, Saiyed Umer, Nz Jhanjhi
Source Title: Intelligent Automation and Soft Computing
Classification of patterns is a crucial area of research and applications. Classifying patterns using fuzzy set theory has become of great interest because of its ability to model the parameters. One of the problems observed in the fuzzification of an unknown pattern is that importance is given only to the known patterns but not to their features. In contrast, the features of the patterns play an essential role when their respective patterns overlap. In this paper, an optimal fuzzy nearest neighbor model has been introduced in which a fuzzification process is carried out for the unknown pattern using k nearest neighbors. With the help of the fuzzification process, the membership matrix is formed. In this membership matrix, fuzzification is carried out over the features of the unknown pattern. Classification results are verified on a completely labelled Telugu vowel data set, and the accuracy is compared with different models and the fuzzy k nearest neighbor algorithm. The proposed model gives 84.86% accuracy with a 50% training data set and 89.35% accuracy with an 80% training data set. The proposed classifier learns well enough with a small amount of training data, resulting in an efficient and faster approach.
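The classical fuzzy k-NN membership assignment that such models build on can be sketched directly; this is the standard Keller-style formulation, shown as a generic illustration rather than the paper's exact model:

```python
# Sketch: fuzzy k-NN membership of an unknown pattern (Keller-style weighting).
import numpy as np

def fuzzy_knn_memberships(x, X_train, u_train, k=5, m=2):
    """u_train: (n, c) class-membership matrix of the labelled patterns."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2 / (m - 1))   # inverse-distance weights
    return (w[:, None] * u_train[nn]).sum(axis=0) / w.sum()

rng = np.random.default_rng(5)
X = rng.random((30, 2))
u = np.eye(3)[rng.integers(0, 3, 30)]        # crisp memberships for 3 classes
print(fuzzy_knn_memberships(np.array([0.5, 0.5]), X, u))  # memberships sum to 1
```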
A fuzzy rule based machine intelligence model for cherry red spot disease detection of human eyes in IoMT
Dr Kshira Sagar Sahoo, Anand Nayyar, Kalyan Kumar Jena, Sourav Kumar Bhoi, Debasis Mohapatra, Chittaranjan Mallick
Source Title: Wireless Networks
The Internet of Medical Things (IoMT) plays an important role nowadays in supporting healthcare systems. Hospital equipment, called medical things, is now connected to the cloud to obtain many useful services, and the data generated by the equipment are sent to the cloud for the desired service. In the current scenario, most hospitals collect many images using such equipment, but it has little computational capability to process the huge volume of generated data. In this work, one such device is considered, which can capture human eye images and send them to the cloud for detection of cherry red spot (CRS). CRS is considered a very dangerous eye disease, and its early diagnosis must be prioritized to avoid adverse effects on the human body. In this paper, a machine intelligence-based model is proposed to detect CRS disease areas in human eyes by analyzing several CRS disease images using IoMT. The proposed approach mainly relies on a fuzzy rule-based mechanism to identify the affected areas of the eye at the cloud layer. From the results, it is observed that the CRS disease areas in the eyes are detected with better detection accuracy and lower detection error than the k-means algorithm. This approach will help doctors track the exact position of the affected areas in the eye for diagnosis. The simulation is performed using socket programming written in Python 3, where a cloud server and a client device are created and images are sent from the client device to the server; the detection of CRS is then performed at the server using MATLAB R2015b. The proposed method provides better performance, with detection accuracy, detection error, and processing time of 94.67%, 5.33%, and 1.1481 units, respectively, in an average-case scenario.
FSE2R: An Improved Collision-Avoidance-based Energy Efficient Route Selection Protocol in USN
Dr Kshira Sagar Sahoo, Madhumita Panda, N Z Jhanjhi, Prasant Ku Dash, Lopamudra Hota, Mehedi Masud
Source Title: Computer Systems Science and Engineering
3D Underwater Sensor Networks (USNs) have become the most promising medium for tracking and monitoring the underwater environment. Energy and collision are the two most critical factors in USNs for both sparse and dense regions. Due to the harsh ocean environment, it is a challenge to design a reliable, energy-efficient, collision-free protocol. Diversity in link qualities may cause collisions, and frequent communication leads to energy loss, which affects network performance. To overcome these challenges, a novel protocol, Forwarder Selection Energy Efficient Routing (FSE2R), is proposed. Our proposal's key idea is based on the computation of node distance from the sink, the Residual Energy (RE) of each node, and the Signal to Interference Noise Ratio (SINR). The node distance from the sink and the RE are computed for reliable forwarder node selection, and the SINR is used for collision analysis. The proposal is compared with existing protocols such as H2AB, DEEP, and E2LR in terms of Quality of Service (QoS) metrics: throughput, packet delivery ratio, and energy consumption. The comparative analysis shows that FSE2R gives, on average, 30% less energy consumption, 24.62% better PDR, and 48.31% less end-to-end delay compared to the other protocols.
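The forwarder choice reduces to scoring neighbours on the three quantities named above. A hedged sketch follows; the normalization, weights, and SINR cutoff are assumptions for illustration, not FSE2R's exact rule:

```python
# Sketch: score candidate forwarders by sink distance, residual energy and SINR.
def forwarder_score(node, d_max, e_max, sinr_min_db, w=(0.5, 0.5)):
    """node: dict with keys d_sink (m), re (J), sinr_db. Higher score is better."""
    if node["sinr_db"] < sinr_min_db:       # likely collision: disqualify
        return 0.0
    closeness = 1 - node["d_sink"] / d_max  # prefer nodes nearer the sink
    energy = node["re"] / e_max             # prefer energy-rich nodes
    return w[0] * closeness + w[1] * energy

neigh = [{"d_sink": 120, "re": 4.2, "sinr_db": 14},
         {"d_sink": 90, "re": 1.1, "sinr_db": 6}]
best = max(neigh, key=lambda n: forwarder_score(n, d_max=300, e_max=5.0, sinr_min_db=10))
print(best["d_sink"])   # first node wins: poor SINR disqualifies the second
```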
ML-MDS: Machine Learning based Misbehavior Detection System for Cognitive Software-defined Multimedia VANETs (CSDMV) in smart cities
Dr Kshira Sagar Sahoo, Rajendra Prasad Nayak, Srinivas Sethi, Sourav Kumar Bhoi, Anand Nayyar
Source Title: Multimedia Tools and Applications
Security is a major concern in vehicular networks for reliable communication between source and destination in smart cities. Data these days take the form of safety or non-safety messages in formats like text, audio, images, video, etc. Such information exchanges between two parties need to be backed by a trust value (TV) updated by analyzing the communication data. In this paper, a machine learning-based misbehavior detection system (ML-MDS) is proposed for cognitive software-defined multimedia vehicular networks (CSDMV) in smart cities. In the proposed system, before communication, a vehicle must be aware of the TV of other vehicles. If the TV of a vehicle is higher than a threshold (th), then the communication happens and the whole transaction information is sent to the local software-defined network controller (LSDNC) for classification of behavior using the ML algorithm. After this, the TV is updated as per the last transaction status at the LSDNC, and the updated TV of the vehicle is sent to the main SDN controller for information gathering. In this system, the best ML algorithm for the ML-MDS model is selected by considering decision tree, support vector machine (SVM), neural network (NN), and logistic regression (LR) algorithms. The classification accuracy is evaluated using the UNSW_NB-15 standard dataset for detecting normal and malicious vehicles; NN shows better classification accuracy than the other algorithms. The proposed ML-MDS is implemented and evaluated using the OMNeT++ network simulator and the Simulation of Urban Mobility (SUMO) road traffic simulator, considering various parameters such as detection accuracy, detection time, and energy consumption. From the results, it is observed that the detection accuracy of the proposed ML-MDS system is 98.4%, compared to 80.2% for the scheme of Grover et al. Also, to assess scalability, the dataset size is increased and performance is evaluated in the Orange 3.26.0 machine analytics tool; NN is again found to be the best algorithm, showing high accuracy in detecting the attackers.
An Improved Machine Learning Framework for Cardiovascular Disease Prediction
Source Title: Communications in Computer and Information Science
Cardiovascular diseases have the highest fatality rate among the world's most deadly syndromes. Stress, age, gender, cholesterol, Body Mass Index, physical inactivity, and an unhealthy diet are all key risk factors for cardiovascular disease. Based on these parameters, researchers have suggested various early diagnosis methods. However, the correctness of the supplied treatments and approaches needs considerable fine-tuning due to the intrinsic criticality and life-threatening hazards of cardiovascular illness. This paper proposes a framework for accurate cardiovascular disorder prediction based on machine learning techniques. To address class imbalance, the method employs synthetic minority over-sampling (SMOTE). Benchmark datasets are used to validate the framework on metrics such as recall and accuracy. Finally, a comparison with existing state-of-the-art approaches is presented, showing 99.16% accuracy for a collaborative model combining logistic regression and KNN.
AutoDBaaS: Autonomous database as a service for managing backing services
Source Title: Advances in Database Technology - EDBT
Analysis of Breath-Holding Capacity for Improving Efficiency of COPD Severity-Detection Using Deep Transfer Learning
Dr Kshira Sagar Sahoo, Narendra Kumar Rout, Jhanjhi N Z, Nirjharinee Parida, Ranjeet Kumar Rout, Mohammed A Alzain
Source Title: Applied Sciences
Air collection around the lung regions can cause lungs to collapse. Conditions like emphysema can cause chronic obstructive pulmonary disease (COPD), wherein the lungs get progressively damaged and the damage cannot be reversed by treatment. It is recommended that these conditions be detected early, via highly complex image processing models applied to chest X-rays, so that the patient's life may be extended. In COPD, the bronchioles are narrowed and blocked with mucus, causing destruction of the alveolar geometry. These changes can be visually monitored via feature analysis using effective image classification models such as convolutional neural networks (CNNs). CNNs have proven to achieve more than 95% accuracy for detection of COPD conditions on static datasets. For consistent performance of CNNs, this paper presents an incremental learning mechanism that uses deep transfer learning for incrementally updating classification weights in the system. The proposed model is tested on three different lung X-ray datasets, and an accuracy of 99.95% is achieved for detection of COPD. In this paper, a model for temporal analysis of COPD-detected imagery is also proposed. This model uses Gated Recurrent Units (GRUs) for evaluating the lifespan of patients with COPD. Analysis of lifespan can assist doctors and other medical practitioners in taking recommended steps for aggressive treatment. A smaller dataset was available for temporal analysis of COPD values, because patients are not advised continuous chest X-rays due to their long-term side effects, which resulted in an accuracy of 97% for lifespan analysis.
Core-based Approach to Measure Pairwise Layer Similarity in Multiplex Network
Dr Kshira Sagar Sahoo, N Z Jhanjhi, Debasis Mohapatra, Sourav Kumar Bhoi, Kalyan Kumar Jena, Chittaranjan Mallick, Mehedi Masud
Source Title: Intelligent Automation and Soft Computing
Most of the recent works on network science focus on investigating various interactions among a set of entities present in a system that can be represented by a multiplex network, where each type of relationship is treated as a layer. Some recent works on multiplex networks derive layer similarity from node similarity, where node similarity is evaluated using neighborhood measures like cosine similarity and Jaccard similarity. But this type of analysis falls short of finding the set of nodes having the same influence in both networks. The discovery of influence similarity between the layers of multiplex networks helps in strategizing cascade effects, influence maximization, network controllability, etc. Towards this end, this paper proposes a pairwise similarity evaluation of layers based on a set of common core nodes of the layers. It considers the number of nodes present in the common core set, the average clustering coefficient of the common core set, and the fractional influence capacity of the common core set to quantify layer similarity. The experiment is carried out on three real multiplex networks. As the proposed notion of similarity uses a different aspect of layer similarity than the existing one, a low positive correlation (close to non-correlation) is found between the proposed and existing approaches to layer similarity. The results demonstrate that the coreness difference is smaller for the datasets under the proposed method than under the existing one: the existing method reports coreness differences of 40% and 18.4% for the CS-AARHUS and EU-AIR TRANSPORTATION MULTIPLEX datasets, respectively, whereas they are 20% and 8.1% using the proposed approach.
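The common-core computation maps directly onto networkx primitives. A minimal sketch follows; the toy layers and the way the quantities would be combined into a final similarity score are assumptions for illustration:

```python
# Sketch: common core nodes of two multiplex layers and core-based quantities.
import networkx as nx

def max_core_nodes(G):
    cores = nx.core_number(G)
    k = max(cores.values())
    return {v for v, c in cores.items() if c == k}

layer1 = nx.karate_club_graph()                # stand-ins for two real layers
layer2 = nx.cycle_graph(34)
common = max_core_nodes(layer1) & max_core_nodes(layer2)

size = len(common)
acc = nx.average_clustering(layer1.subgraph(common)) if common else 0.0
print(size, round(acc, 3))                     # inputs to a layer-similarity score
```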
Visualization of Audio Files Using Librosa
Dr Kshira Sagar Sahoo, Chandramouli Das, N Z Jhanjhi, Shubham Suman, Ambik Mitra
Source Title: Advances in Intelligent Systems and Computing
Sustainable IoT Solution for Freshwater Aquaculture Management
Source Title: IEEE Sensors Journal
In recent years, we have seen the impact of global warming on changing weather patterns, which have shown a significant effect on annual rainfall. Due to the lack of annual rainfall, developing countries like India have seen a substantial loss in annual crop production. The Indian economy largely depends on agro-products. To compensate for the economic loss, the Indian government has encouraged farmers to take up integrated aquaculture-based farming. Despite government subsidies and training programs, most farmers find it difficult to succeed in aquaculture-based farming: it requires skills to maintain and monitor underwater environments, and the lack of such skills makes the aquaculture business more difficult for farmers. To simplify pearl-farming aquaculture, we propose an Internet of Things (IoT)-based intelligent monitoring and maintenance system. The proposed system monitors the water quality and maintains an adequate underwater environment for better production. To maintain the aquaculture environment, we forecast changes in water parameters using an ensemble learning method based on random forests (RF). The performance of the RF model was compared with linear regression (LR), support vector regression (SVR), and the gradient boosting machine (GBM). The obtained results show that the RF model gave the best forecasts, with a mean absolute error (MAE) of 1.428 for dissolved oxygen (DO) and 0.141 for pH.
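The forecasting step is a standard supervised regression over lagged sensor readings. A hedged scikit-learn sketch follows; the synthetic pH series and lag choice are assumptions, not the deployed pipeline:

```python
# Sketch: random-forest forecast of the next water-quality reading from lags.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(6)
t = np.arange(500)
ph = 7.5 + 0.3 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 0.05, t.size)

lags = 6
X = np.stack([ph[i:i + lags] for i in range(len(ph) - lags)])  # lag windows
y = ph[lags:]                                                  # next reading
split = 400
rf = RandomForestRegressor(n_estimators=200).fit(X[:split], y[:split])
print(mean_absolute_error(y[split:], rf.predict(X[split:])))
```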
Improved Procedure for Multi-Focus Images Using Image Fusion with qshiftN DTCWT and MPCA in Laplacian Pyramid Domain
Dr Kshira Sagar Sahoo, Noor Zaman Jhanjhi, Ashraf Osman Ibrahim, Chinnem Rama Mohan, Kuldeep Chouhan, Ranjeet Kumar Rout, Abdelzahir Abdelmaboud
Source Title: Applied Sciences (Switzerland)
Multi-focus image fusion (MIF) uses fusion rules to combine two or more images of the same scene with various focus values into a fully focused image. An all-in-focus image refers to a fully focused image that is more informative and useful for visual perception. A fused image with high quality is essential for maintaining the shift-invariance and directional selectivity characteristics of the image. Traditional wavelet-based fusion methods, in turn, create ringing distortions in the fused image due to a lack of directional selectivity and shift-invariance. In this paper, a classical MIF system based on the quarter shift dual-tree complex wavelet transform (qshiftN DTCWT) and modified principal component analysis (MPCA) in the Laplacian pyramid (LP) domain is proposed to extract the focused image from multiple source images. In the proposed fusion approach, the LP first decomposes the multi-focus source images into low-frequency (LF) and high-frequency (HF) components. Then, qshiftN DTCWT is used to fuse the low- and high-frequency components to produce a fused image. Finally, to improve the effectiveness of the qshiftN DTCWT and LP-based method, the MPCA algorithm is utilized to generate an all-in-focus image. Due to its directionality and shift-invariance, this transform can provide high-quality information in a fused image. Experimental results demonstrate that the proposed method outperforms many state-of-the-art techniques in terms of visual and quantitative evaluations.
A Whale Optimization Algorithm Based Resource Allocation Scheme for Cloud-Fog Based IoT Applications
Dr Kshira Sagar Sahoo, Nz Jhanjhi, Ranumayee Sing, Sourav Kumar Bhoi, Niranjan Panigrahi, Mohammed A Alzain
Source Title: Electronics
Fog computing has been prioritized over cloud computing for latency-sensitive Internet of Things (IoT)-based services. We consider a limited-resource fog system where real-time tasks with heterogeneous resource configurations must be allocated within their execution deadlines. Two modules are designed to handle real-time continuous streaming tasks. The first module is task classification and buffering (TCB), which classifies task heterogeneity using dynamic fuzzy c-means clustering and buffers tasks into parallel virtual queues according to enhanced least laxity time. The second module is task offloading and optimal resource allocation (TOORA), which decides whether to offload a task to the cloud or the fog and optimally assigns the resources of fog nodes using the whale optimization algorithm, which provides high throughput. The simulation results of our proposed algorithm, called whale optimized resource allocation (WORA), are compared with the results of other models, such as shortest job first (SJF), the multi-objective monotone increasing sorting-based (MOMIS) algorithm, and the Fuzzy Logic based Real-time Task Scheduling (FLRTS) algorithm. When 100 to 700 tasks are executed on 15 fog nodes, the results show that the WORA algorithm saves 10.3% of the average cost of MOMIS and 21.9% of the average cost of FLRTS. In terms of energy consumption, WORA consumes 18.5% less than MOMIS and 30.8% less than FLRTS. WORA also performed 6.4% better than MOMIS and 12.9% better than FLRTS in terms of makespan, and 2.6% better than MOMIS and 4.3% better than FLRTS in terms of successful completion of tasks.
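The whale optimization step that drives WORA-style allocators is short enough to sketch. Below is the standard WOA encircling/spiral update in numpy, given as a generic illustration rather than the paper's exact operator; the coefficients and encoding are assumptions:

```python
# Sketch: one whale optimization algorithm (WOA) position update.
import numpy as np

def woa_step(pop, best, it, max_it, rng):
    a = 2 * (1 - it / max_it)                  # linearly decreasing coefficient
    new = np.empty_like(pop)
    for i, x in enumerate(pop):
        r = rng.random(x.shape)
        A, C = 2 * a * r - a, 2 * rng.random(x.shape)
        if rng.random() < 0.5:                 # encircling prey / exploring
            target = best if np.all(np.abs(A) < 1) else pop[rng.integers(len(pop))]
            new[i] = target - A * np.abs(C * target - x)
        else:                                  # spiral bubble-net move
            l = rng.uniform(-1, 1)
            new[i] = np.abs(best - x) * np.exp(l) * np.cos(2 * np.pi * l) + best
    return new                                 # re-evaluate against makespan/cost

rng = np.random.default_rng(7)
pop = rng.random((10, 5))                      # 10 whales, 5 task-to-node genes
pop = woa_step(pop, best=pop[0], it=1, max_it=50, rng=rng)
```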
AI Driven Cough Voice-Based COVID Detection Framework Using Spectrographic Imaging: An Improved Technology
Dr Kshira Sagar Sahoo, Sardar M N Islam, Soham Chakraborty, Sushruta Mishra
Source Title: 2022 IEEE 7th International Conference for Convergence in Technology
EMCS: An Energy-Efficient Makespan Cost-Aware Scheduling Algorithm Using Evolutionary Learning Approach for Cloud-Fog-Based IoT Applications
Dr Kshira Sagar Sahoo, Ranumayee Sing, Sourav Kumar Bhoi, Niranjan Panigrahi, Muhammad Bilal, Sayed Chhattan Shah
Source Title: Sustainability
The tremendous expansion of the Internet of Things (IoT) has generated an enormous volume of near and remote sensing data, which is increasing with the emergence of new solutions for sustainable environments. Cloud computing is typically used to support resource-constrained IoT sensing devices. However, cloud servers are placed deep within the core network, a long way from the IoT, introducing immense data transactions. These transactions require heavy electricity consumption and release harmful CO2 into the environment. A distributed computing environment located at the edge of the network, named fog computing, has been promoted to reduce the limitations of cloud computing for IoT applications. Fog computing potentially processes real-time and delay-sensitive data, and it reduces traffic, which minimizes energy consumption. The additional energy consumption can be further reduced by implementing energy-aware task scheduling, which decides on the execution of tasks at cloud or fog nodes on the basis of minimum completion time, cost, and energy consumption. In this paper, an algorithm called energy-efficient makespan cost-aware scheduling (EMCS) is proposed using an evolutionary strategy to optimize execution time, cost, and energy consumption. The performance of this work is evaluated using extensive simulations. Results show that EMCS is 67.1% better than cost makespan-aware scheduling (CMaS), 58.79% better than Heterogeneous Earliest Finish Time (HEFT), 54.68% better than the Bees Life Algorithm (BLA), and 47.81% better than Evolutionary Task Scheduling (ETS) in terms of makespan. In terms of cost, the EMCS model costs 62.4% less than CMaS, 26.41% less than BLA, and 6.7% less than ETS. In terms of energy consumption, EMCS consumes 11.55% less than CMaS, 4.75% less than BLA, and 3.19% less than ETS. Results also show that, as the number of fog and cloud nodes increases, the balance between cloud and fog nodes gives better performance in terms of makespan, cost, and energy consumption.
CoviBlock: A Secure Blockchain-Based Smart Healthcare Assisting System
Source Title: Sustainability
The recent COVID-19 pandemic has underlined the significance of digital health record management systems for pandemic mitigation. Existing smart healthcare systems (SHSs) fail to preserve system-level medical record openness and privacy while including mitigating measures such as testing, tracking, and treating (3T). In addition, current centralised compute architectures are susceptible to denial-of-service assaults because of DDoS or bottleneck difficulties, and existing SHSs are susceptible to leakage of sensitive data, unauthorised data modification, and non-repudiation. In the centralised models of current systems, a third party controls the data, and data owners may not have total control over their data. Coviblock, a novel, decentralised, blockchain-based smart healthcare assisting system, is proposed in this study to support medical record privacy and security in the pandemic mitigation process without sacrificing system usability. Coviblock ensures system-level openness and trustworthiness in the administration and use of medical records. Edge computing and the InterPlanetary File System (IPFS) are recommended as part of a decentralised distributed storage system (DDSS) to reduce the latency and the cost of data operations on the blockchain. Using blockchain ledgers, the DDSS ensures system-level transparency and event traceability in the administration of medical records. A distributed, decentralised resource access control mechanism (DDRAC) is also proposed to guarantee the secrecy and privacy of DDSS data. To confirm Coviblock's real-time behaviour, a prototype of the technology is constructed and examined on an Ethereum test network. To demonstrate the benefits of the proposed system, we compare it to current cloud-based health cyber-physical systems (H-CPSs) with blockchain. According to the experimental research, Coviblock maintains the same level of security and privacy as existing H-CPSs while performing considerably better. Lastly, the suggested system greatly reduces operation latency, taking, for example, 32 milliseconds (ms) to produce a new record, 29 ms to update vaccination data, and 27 ms to validate a given certificate through the DDSS.
E-Learning Course Recommender System Using Collaborative Filtering Models
Dr Kshira Sagar Sahoo, Kalyan Kumar Jena, Sourav Kumar Bhoi, Tushar Kanta Malik, N Z Jhanjhi, Sajal Bhatia, Fathi Amsaad
Source Title: Electronics
e-Learning is a sought-after option for learners during pandemic situations. On e-Learning platforms, many courses are available, and a user needs to select the option best suited to them. Thus, recommender systems play an important role in providing better automated services to users making course choices; they make recommendations for users based on their preferences. Such a system can use machine intelligence (MI)-based techniques to carry out the recommendation mechanism, learning from preferences and history what each user likes most. In this work, a recommender system using collaborative filtering is proposed for e-Learning course recommendation. The work focuses on MI-based models such as K-nearest neighbor (KNN), Singular Value Decomposition (SVD), and neural network-based collaborative filtering (NCF). A dataset of one lakh Coursera course reviews is taken from Kaggle for analysis. The proposed work can help learners select e-Learning courses as per their preferences, and it is implemented in Python. The performance of these models is evaluated using metrics such as hit rate (HR), average reciprocal hit ranking (ARHR), and mean absolute error (MAE). From the results, it is observed that KNN performs better, with higher HR and ARHR and lower MAE values, compared to the other models.
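An item-based KNN recommender over a ratings matrix fits in a few lines; the sketch below uses a toy matrix, and the data, similarity choice, and neighbourhood size are assumptions rather than the paper's configuration:

```python
# Sketch: item-based KNN collaborative filtering for course recommendation.
import numpy as np

def recommend(R, user, k=2, top_n=2):
    """R: users x courses rating matrix with 0 for unrated entries."""
    norms = np.linalg.norm(R, axis=0) + 1e-12
    sim = (R.T @ R) / np.outer(norms, norms)        # course-course cosine similarity
    scores = np.zeros(R.shape[1])
    for c in np.flatnonzero(R[user] == 0):          # score only unrated courses
        nn = [j for j in np.argsort(sim[c])[::-1] if R[user, j] > 0][:k]
        if nn:
            w = sim[c, nn]
            scores[c] = w @ R[user, nn] / (w.sum() + 1e-12)
    return np.argsort(scores)[::-1][:top_n]

R = np.array([[5, 4, 0, 0],
              [4, 0, 4, 2],
              [1, 1, 5, 4]], dtype=float)
print(recommend(R, user=0))                         # course indices to suggest
```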
Smart COVID-shield: an IoT driven reliable and automated prototype model for COVID-19 symptoms tracking
Dr Kshira Sagar Sahoo, Hrudaya Kumar Tripathy, Sushruta Mishra, Shubham Suman, Anand Nayyar
Source Title: Computing (Vienna/New York)
IoT technology is revolutionizing healthcare and transforming it into more personalized healthcare. In the context of the COVID-19 pandemic, IoT's intervention can help to detect its spread. This research proposes an effective Smart COVID-Shield capable of automatically detecting prevalent symptoms like fever and coughing, along with ensuring that social distancing norms are properly followed. It comprises three modules: a Cough Detect Module (CDM) for dry cough detection, a Temperature Detect Module (TDM) for high-temperature monitoring, and a Distance Compute Module (DCM) to track social distancing norm violators. The device combines a lightweight fabric suspender worn around the shoulders with a flexible belt wrapped around the waist. The suspender is equipped with a passive infrared (PIR) sensor and a temperature sensor to monitor persistent coughing patterns and high body temperature, while an ultrasonic sensor verifies the 6-foot distance for tracking an individual's compliance with social distancing norms. The developed model was deployed in an aluminum factory to verify its effectiveness. The results obtained were promising and reliable compared to conventional manual procedures. The model accurately reported rises in body temperature, outperforming the thermal gun: it recorded a mean of only 4.65% of candidates with higher body temperature compared to 8.59% with the thermal gun. A significant reduction of 3.61% in social distance violators was observed. Besides this, the latency delay of 10.32 s remained manageable with a participant count of over 800, which makes the system scalable.
Feature-extraction and analysis based on spatial distribution of amino acids for SARS-CoV-2 Protein sequences
Dr Kshira Sagar Sahoo, Rout R K., Hassan S S., Umer S., Gandomi A H., Sheikh S
Source Title: Computers in Biology and Medicine, DOI Link,
View abstract ⏷
The world is currently facing a global emergency due to COVID-19, which requires immediate strategies to strengthen healthcare facilities and prevent further deaths. To achieve effective remedies and solutions, research on different aspects, including genomic- and proteomic-level characterizations of SARS-CoV-2, is critical. In this work, the spatial representation/composition and distribution frequency of the 20 amino acids across the primary protein sequences of SARS-CoV-2 were examined according to different parameters. To identify the spatial distribution of amino acids over the primary protein sequences of SARS-CoV-2, the Hurst exponent and Shannon entropy were applied as parameters to capture the autocorrelation and the amount of information in the spatial representations. The frequency distribution of each amino acid over the protein sequences was also evaluated. In the case of a one-dimensional sequence, the Hurst exponent (HE) was utilized due to its linear relationship with the fractal dimension (D), i.e., D + HE = 2, to characterize fractality. Moreover, binary Shannon entropy was considered to measure the uncertainty in a binary sequence and then applied to calculate amino acid conservation in the primary protein sequences. Fourteen (14) SARS-CoV protein sequences were evaluated and compared with 105 SARS-CoV-2 proteins. The simulation results demonstrate the differences in the collected information about the amino acid spatial distribution in the SARS-CoV-2 and SARS-CoV proteins, enabling researchers to distinguish between the two types of CoV. The spatial arrangement of amino acids also reveals similarities and dissimilarities among the important structural proteins E, M, N and S, which is pivotal for establishing an evolutionary tree with other CoV strains.
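For intuition, the sketch below computes the binary Shannon entropy of an amino acid's indicator sequence, one of the measures used above; the fractal dimension would then follow from D = 2 - HE. The toy sequence is illustrative, not a real SARS-CoV-2 protein.

```python
# Binary Shannon entropy of an amino acid's spatial indicator sequence,
# a minimal sketch of the per-residue analysis. The sequence below is a
# toy example, not a real SARS-CoV-2 protein.
import math

def indicator(seq, residue):
    # 1 where the residue occurs, 0 elsewhere: the binary sequence.
    return [1 if ch == residue else 0 for ch in seq]

def binary_shannon_entropy(bits):
    p1 = sum(bits) / len(bits)
    p0 = 1 - p1
    return -sum(p * math.log2(p) for p in (p0, p1) if p > 0)

toy_protein = "MFVFLVLLPLVSSQCVNLTTRTQLPPA"
for aa in "LV":
    h = binary_shannon_entropy(indicator(toy_protein, aa))
    print(aa, round(h, 3))
```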
A Data Aggregation Approach Exploiting Spatial and Temporal Correlation among Sensor Data in Wireless Sensor Networks
Dr Kshira Sagar Sahoo, Dr Sambit Kumar Mishra, Lucy Dash., Noor Zaman Jhanjhi., Mohammed Baz., Mehedi Masud., Binod Kumar Pattanayak
Source Title: Electronics, DOI Link,
View abstract ⏷
Wireless sensor networks (WSNs) have various applications, including zone surveillance, environmental monitoring, and event tracking, where the operation mode is long term. WSNs are characterized by low-powered, battery-operated sensor devices with a finite source of energy. Due to the dense deployment of these devices, it is practically impossible to replace their batteries. The finite energy source should therefore be utilized meaningfully to maximize the overall network lifetime. In the spatial domain, there is high correlation among the observations of densely deployed neighboring sensors across the sensor network topology. Consecutive observations of a node are temporally correlated, depending on the nature of the physical phenomenon being sensed. These spatio-temporal correlations can be efficiently utilized to maximize energy savings. In this paper, we have proposed a Spatial and Temporal Correlation-based Data Redundancy Reduction (STCDRR) protocol which eliminates redundancy at the source level and the aggregator level. The estimated performance score of the proposed algorithm is approximately 7.2, whereas the scores of existing algorithms such as KAB (K-means algorithm based on the ANOVA model and Bartlett test) and ED (Euclidean distance) are 5.2 and 0.5, respectively. This reflects that the STCDRR protocol can achieve a higher data compression rate and lower false-negative and false-positive rates. These results are valid for numeric data collected from a real data set; the experiment does not consider non-numeric values.
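As a minimal sketch of source-level redundancy elimination exploiting temporal correlation, a node can transmit a reading only when it deviates from the last transmitted value by more than a tolerance. The code below illustrates this idea; the tolerance and data are assumptions, not the STCDRR protocol's actual rules.

```python
# Source-level temporal redundancy suppression: transmit a reading only
# when it differs from the last transmitted value by more than a
# tolerance. Values are illustrative, not the STCDRR protocol's rules.
def suppress_redundant(readings, tolerance=0.5):
    sent = []
    last = None
    for r in readings:
        if last is None or abs(r - last) > tolerance:
            sent.append(r)   # transmit: significant change
            last = r
        # else: suppress, temporally correlated with the last report
    return sent

temps = [24.0, 24.1, 24.2, 25.3, 25.4, 27.0]
print(suppress_redundant(temps))  # [24.0, 25.3, 27.0]
```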
Hybrid Approach to Prevent Accidents at Railway: An Assimilation of Big Data, IoT and Cloud
Source Title: Intelligent Systems Reference Library, DOI Link,
View abstract ⏷
Indian Railways, the nation's transport lifeline, is the world's fourth-largest railway network, with more than 70,000 passenger coaches and over 11,000 locomotives. Hence, ensuring the safety and security of the people is a much-prioritized issue these days. According to a survey, an average of 110 accidents occurred each year between 2013 and 2018, in which around 990 people were killed and 1,500 injured. About 54% of accidents were due to causes such as level crossings, fire, collisions, runaways, and suicides. Such problems need to be analyzed intelligently in a cognitive manner based on human behavior and movement. The idea that communities or groups of people provide data similar to those obtainable from a single sensor is known as social sensing. This chapter presents a brief idea of social sensing and how it relates to Big Data. A framework based on the assimilation of Big Data, Internet of Things, and Cloud Computing is presented for the entire railway operation mechanism. Along with that, some accident prevention techniques with metaheuristic strategies are also discussed. Finally, the chapter concludes with a discussion of some future research axes.
A LSTM-FCNN based multi-class intrusion detection using scalable framework
Dr Kshira Sagar Sahoo, Santosh Kumar Sahu., Durga Prasad Mohapatra., Jitendra Kumar Rout., Quoc Viet Pham., Nhu Ngoc Dao
Source Title: Computers & Electrical Engineering, DOI Link,
View abstract ⏷
Machine learning methods are widely used to implement intrusion detection models for detecting and classifying intrusions in a network or a system. However, many challenges arise since hackers continuously change their attack patterns by discovering new system vulnerabilities. The volume of malicious attempts increases rapidly; as a result, conventional approaches fail to process the voluminous data. A sophisticated detection approach with scalable solutions is therefore required to tackle the problem. A deep learning model is proposed to address the intrusion classification problem effectively. The LSTM (Long Short-Term Memory) and FCN (Fully Connected Network) deep learning approaches classify benign and malicious connections on intrusion datasets. The objective is to classify multi-class attack patterns more accurately. The proposed deep learning model provides better classification results on both two-class and five-class problems, achieving accuracies of 98.52%, 98.94%, 99.03%, 99.36%, 100%, and 99.64% on the KDDCup99, NSLKDD, GureKDD, KDDCorrected, Kyoto, and NITRIDS datasets, respectively.
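A minimal Keras sketch of the LSTM-plus-fully-connected architecture for multi-class intrusion classification is shown below. The input shape, layer sizes, five-class output, and random stand-in data are assumptions for illustration, not the paper's trained model.

```python
# A minimal LSTM + fully connected network for multi-class intrusion
# classification. Shapes, layer sizes, and data are assumptions.
import numpy as np
from tensorflow.keras import layers, models

n_features, n_classes = 41, 5   # e.g., KDD-style feature vectors

model = models.Sequential([
    layers.Input(shape=(1, n_features)),      # each record as a length-1 sequence
    layers.LSTM(64),                          # sequence feature extractor
    layers.Dense(32, activation="relu"),      # fully connected network
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy data standing in for a preprocessed intrusion dataset.
X = np.random.rand(256, 1, n_features).astype("float32")
y = np.random.randint(0, n_classes, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```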
A Systematic Review on Osmotic Computing
Dr Kshira Sagar Sahoo, Sanjaya Kumar Panda., Amir H Gandomi., Benazir Neha., Pradip Kumar Sahu
Source Title: ACM Transactions on Internet of Things, DOI Link,
View abstract ⏷
Osmotic computing, in association with related computing paradigms (cloud, fog, and edge), emerges as a promising solution for handling the bulk of security-critical as well as latency-sensitive data generated by digital devices. It is a growing research domain that studies the deployment, migration, and optimization of applications in the form of microservices across cloud/edge infrastructure. It presents dynamically tailored microservices in technology-centric environments by exploiting edge and cloud platforms. Osmotic computing promotes digital transformation and furnishes benefits to transportation, smart cities, education, and healthcare. In this article, we present a comprehensive analysis of osmotic computing through a systematic literature review approach. To ensure a high-quality review, we conducted an advanced search of numerous digital libraries to extract related studies. The advanced search strategy identified 99 studies, from which 29 relevant studies were selected for a thorough review. We present a summary of applications in osmotic computing based on their key features. On the basis of these observations, we outline the research challenges for applications in this field. Finally, we discuss the security issues resolved and unresolved in osmotic computing.
Model updating using causal information: a case study in coupled slab
Dr Kshira Sagar Sahoo, Kunal Tiwary., Sanjaya Kumar Patro., Amir H Gandomi
Source Title: Structural and Multidisciplinary Optimization, DOI Link,
View abstract ⏷
Problems like improper sampling (sampling on unnecessary variables) and undefined prior distributions (or taking random priors) often occur in model updating. Any such limitations on model parameters can lead to lower accuracy and higher experimental costs (due to more iterations) of structural optimisation. In this work, we explored the effective dimensionality of the model updating problem by leveraging causal information. In order to utilise the causal structure between the parameters, we used Causal Bayesian Optimisation (CBO), a recent variant of Bayesian Optimisation, to integrate observational and interventional data. We also employed generative models to generate synthetic observational data, which helps in creating a better prior for the surrogate models. In this case study of a coupled slab structure in a recreational building, updated modal frequencies were extracted from the finite element model of the structure and compared to measured frequencies from ambient vibration tests found in the literature. Mode shapes between experimental and predicted values were also compared using modal assurance criterion (MAC) percentages. The updated frequency and MAC number obtained using the proposed model were found in the fewest iterations (which impacts the experimental budget) compared to previous approaches that optimise the same parameters using the same data. This also shows the impact of causal information on the experimental budget.
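The modal assurance criterion used for this comparison is the standard normalized correlation between mode-shape vectors, MAC = (phi_a^T phi_e)^2 / ((phi_a^T phi_a)(phi_e^T phi_e)). A small sketch with toy vectors:

```python
# Modal assurance criterion (MAC) between an experimental and an
# updated-model mode shape; the formula is standard, the vectors are toy
# values for illustration.
import numpy as np

def mac(phi_a, phi_e):
    num = (phi_a @ phi_e) ** 2
    den = (phi_a @ phi_a) * (phi_e @ phi_e)
    return num / den

phi_exp = np.array([0.32, 0.61, 0.88, 1.00])   # measured mode shape
phi_upd = np.array([0.30, 0.63, 0.85, 0.99])   # updated-model mode shape
print(f"MAC = {mac(phi_exp, phi_upd) * 100:.2f}%")
```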
IoT-EMS: An Internet of Things Based Environment Monitoring System in Volunteer Computing Environment
Dr Kshira Sagar Sahoo, Sultan Aljahdali., N Z Jhanjhi., Mehedi Masud., Sourav Kumar Bhoi., Sanjaya Kumar Panda., Kalyan Kumar Jena
Source Title: Intelligent Automation and Soft Computing, DOI Link,
View abstract ⏷
Environment monitoring is an important area, alongside environmental safety and pollution control. Monitoring based on physical models of the atmosphere is unstable and inaccurate; Machine Learning (ML) techniques, on the other hand, are more robust in capturing the dynamics of the environment. In this paper, a novel approach is proposed to build a cost-effective, standardized environment monitoring system (IoT-EMS) in a volunteer computing environment. In volunteer computing, volunteers (people) share their resources for distributed computing to perform a task (here, environment monitoring). The system is based on the Internet of Things and is controlled and accessed remotely through the Arduino platform (the volunteer resource). In this system, volunteers record environment information from their surroundings through different sensors. The sensor readings are uploaded directly to a web server database, from where they can be viewed anytime and anywhere through a website. Analytics on the gathered time-series data is achieved through ML modeling using the R language and the RStudio IDE. Experimental results show that the system is able to accurately predict trends in temperature, humidity, carbon monoxide level, and carbon dioxide. The prediction accuracies of different ML techniques, such as MLP, k-NN, multiple regression, and SVM, are also compared in different scenarios.
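A minimal sketch of the model-comparison step is shown below, fitting several regressors on lag features of a synthetic temperature series and comparing their errors. The paper's analysis uses R, so this Python version and its synthetic data are purely illustrative.

```python
# Comparing regressors on a sensor time series via lag features.
# The synthetic data and model settings are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

# Synthetic hourly temperature with a daily cycle plus noise.
t = np.arange(0, 24 * 14, dtype=float)
temp = 25 + 5 * np.sin(2 * np.pi * t / 24) + np.random.normal(0, 0.5, t.size)

# Lag features: predict each reading from the previous 3 readings.
lags = 3
X = np.column_stack([temp[i:len(temp) - lags + i] for i in range(lags)])
y = temp[lags:]

split = int(0.8 * len(y))
for name, model in [("k-NN", KNeighborsRegressor()),
                    ("linear", LinearRegression()),
                    ("SVR", SVR())]:
    model.fit(X[:split], y[:split])
    mae = mean_absolute_error(y[split:], model.predict(X[split:]))
    print(f"{name}: MAE = {mae:.3f}")
```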
An Effective Probabilistic Technique for DDoS Detection in OpenFlow Controller
Source Title: IEEE Systems Journal, DOI Link,
View abstract ⏷
Distributed denial of service (DDoS) attacks have been a nightmare for network infrastructure for the last two decades. Traditional network infrastructure is poor at identifying and mitigating these attacks due to its inflexible nature. Software-defined networking (SDN), by contrast, is popular for its ability to monitor and dynamically configure network devices based on a global view of the network. In SDN, the control layer is responsible for making all decisions in the network, while the data plane simply forwards packets. This unique property of SDN has generated considerable interest among network security researchers in preventing DDoS attacks. In this article, a probabilistic technique based on the central limit theorem is utilized for the identification of DDoS attacks in the OpenFlow controller. The method primarily detects resource depletion attacks, and the DARPA dataset is used to train the probabilistic model. In different attack scenarios, the probabilistic approach outperforms the entropy-based method in terms of false negative rate (FNR). The emulation results demonstrate the efficacy of the approach, reducing the FNR by 98% compared to 78% for the existing entropy mechanism at a 50% attack rate.
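The central-limit-theorem idea can be sketched as follows: the mean of n per-window packet-in counts is approximately normal, so a window whose sample mean exceeds mu + z * sigma / sqrt(n) is flagged. All statistics and thresholds below are assumed values, not those learned from the DARPA dataset.

```python
# CLT-based anomaly check on packet-in rates. The benign-traffic
# statistics, window size, and z value are illustrative assumptions.
import math
import numpy as np

mu, sigma = 120.0, 15.0   # per-window packet-in stats from benign traffic
n, z = 30, 3.0            # window size and confidence multiplier (assumed)

# By the CLT, the mean of n counts is ~ Normal(mu, sigma / sqrt(n)).
threshold = mu + z * sigma / math.sqrt(n)

window = np.random.normal(135, 10, n)   # a suspicious traffic sample
if window.mean() > threshold:
    print(f"DDoS suspected: mean {window.mean():.1f} > {threshold:.1f}")
```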
Demand-Supply Based Economic Model for Resource Provisioning in Industrial IoT Traffic
Dr Kshira Sagar Sahoo, Mayank Tiwary., Ashish Kr Luhach., Anand Nayyar., Kim Kwang Raymond Choo., Muhammad Bilal
Source Title: IEEE Internet of Things Journal, DOI Link,
View abstract ⏷
Software-defined networks (SDNs) can help facilitate dynamic network resource provisioning in demanding applications, such as those involving Industrial Internet of Things (IIoT) devices and systems. For example, SDN-based systems can support the increasing demands of multitenancy at the network layer, the complex demands of microservices, and so on. A typical (large) manufacturing setting generally comprises a broad and diverse range of IoT devices and applications supporting different services (e.g., transactions on enterprise resource planning (ERP) software, maintenance prediction, asset management, and outage prediction). Hence, this work introduces a demand-supply-based economic model to enhance the efficiency of different multitenancy attributes at the network layer; it captures the computational complexity of industrial ERP-IoT transactions and performs network resource provisioning based on the demand-supply principle. The proposed model is accompanied by a flow scheduler, which dynamically assigns ERP-IoT traffic flow entries on network devices to specific preconfigured queues and is used to increase service providers' utility. Evaluations demonstrate the effectiveness of the proposed approach.
Vision Navigator: A Smart and Intelligent Obstacle Recognition Model for Visually Impaired Users
Source Title: Mobile Information Systems, DOI Link,
View abstract ⏷
Vision impairment is a major challenge faced by humanity on a large scale throughout the world. Affected people find independently navigating and detecting obstacles extremely tedious. Thus, a potential solution for accurately detecting obstacles requires an integrated deployment of the Internet of Things and predictive analytics. This research introduces "Vision Navigator," a novel framework for assisting visually impaired users in obstacle analysis and tracking so that they can move independently. An intelligent stick named "Smart-fold Cane" and sensor-equipped shoes called "Smart-alert Walker" are the main constituents of the proposed model. For object detection and classification, the stick uses a single-shot detection (SSD) mechanism, which is followed by frame generation using a recurrent neural network (RNN) model. Smart-alert Walker is a lightweight shoe that acts as an emergency unit, notifying the user of any obstacle within a short distance range. This intelligent obstacle detection model using the SSD-RNN approach was deployed in real time and its performance was validated in indoor and outdoor environments. The SSD-RNN model computed an optimum accuracy of 95.06% and 87.68% indoors and outdoors, respectively. The model was also evaluated with respect to users' distance from obstacles. The proposed SSD-RNN model had an accuracy rate of 96.4% and 86.8% for close and distant obstacles, respectively, outperforming other models. Execution time for the SSD-RNN model was 4.82 s with the highest mean accuracy rate of 95.54% considering all common obstacles.
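As a hedged illustration of the detection stage only, the sketch below runs an off-the-shelf SSD from torchvision over a stand-in frame; the paper's SSD-RNN pipeline and its trained weights are not reproduced here.

```python
# Off-the-shelf SSD inference as a stand-in for the detection stage.
# The model, weights, and confidence threshold are assumptions; this is
# not the paper's SSD-RNN pipeline.
import torch
from torchvision.models.detection import ssd300_vgg16

model = ssd300_vgg16(weights="DEFAULT").eval()

image = torch.rand(3, 300, 300)          # stand-in for a camera frame
with torch.no_grad():
    out = model([image])[0]              # dict of boxes, labels, scores

for box, score in zip(out["boxes"], out["scores"]):
    if score > 0.5:                      # assumed confidence threshold
        print("obstacle at", box.tolist())
```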
Air Quality Index Analysis of Indian Cities During COVID-19 Using Machine Learning Models: A Comparative Study
Dr Kshira Sagar Sahoo, Lopamudra Hota., Prasant Kumar Dash., Amir H Gandomi
Source Title: 2021 8th International Conference on Soft Computing & Machine Intelligence (ISCMI), DOI Link,
View abstract ⏷
Rapid urbanisation has led to degradation of the air quality index in past decades, caused by pollutants generated by factories, industries and transportation. Designing an automated system for air quality tracking and monitoring is essential for generating awareness. Restrictions imposed by the COVID-19 lockdown resulted in a reduction of pollutants in the air, with a great impact on air pollution management. An analysis of the air pollution index based on vehicular and industrial pollutants is performed, depicting the most polluted cities in India. Various machine learning models are compared so as to identify a better model for classification and analysis. The results delineate that Delhi was one of the most polluted cities before lockdown but showed a tremendous decrease in air pollution index after lockdown. Furthermore, boosting models proved to outperform other models in the prediction and forecasting of the air quality index.
TBDDoSA-MD: Trust-Based DDoS Misbehave Detection Approach in Software-defined Vehicular Network (SDVN)
Dr Kshira Sagar Sahoo, N Z Jhanjhi., Rajendra Prasad Nayak., Srinivas Sethi., Sourav Kumar Bhoi., Thamer A Tabbakh., Zahrah A Almusaylim
Source Title: Computers, Materials and Continua, DOI Link,
View abstract ⏷
Reliable vehicles are essential in vehicular networks for effective communication. Since vehicles in the network are dynamic, even a short span of misbehavior by a vehicle can disrupt the whole network, which may lead to catastrophic consequences. In this paper, a Trust-Based Distributed DoS Misbehave Detection Approach (TBDDoSA-MD) is proposed to secure the Software-Defined Vehicular Network (SDVN). A malicious vehicle in this network performs DDoS misbehavior by attacking other vehicles in its neighborhood. It uses a jamming technique, sending unnecessary signals in the network; as a result, network performance degrades, and attacked vehicles can no longer meet service requests from other vehicles. Therefore, this paper proposes an approach to detect DDoS misbehavior using the trust values of the vehicles. Trust values are calculated based on direct trust and recommendations (indirect trust). These trust values help to decide whether a vehicle is legitimate or malicious. Messages from malicious vehicles are simply discarded, whereas the authenticity of messages from legitimate vehicles is checked further before any action is taken based on those messages. The performance of TBDDoSA-MD is evaluated in the Veins hybrid simulator, which uses OMNeT++ and Simulation of Urban Mobility (SUMO). We compared the performance of TBDDoSA-MD with the recently proposed Trust-Based Framework (TBF) scheme using performance parameters such as detection accuracy, packet delivery ratio, detection time, and energy consumption. Simulation results show that the proposed work has a high detection accuracy of more than 90% while keeping the detection time as low as 30 s.
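A minimal sketch of the trust computation: direct trust is blended with the average of neighbors' recommendations, and the combined value is compared against a threshold. The weight alpha and threshold below are assumptions, not TBDDoSA-MD's calibrated values.

```python
# Combining direct and recommended (indirect) trust to flag a
# misbehaving vehicle. alpha and the threshold are assumed values.
def combined_trust(direct, recommendations, alpha=0.6):
    indirect = sum(recommendations) / len(recommendations)
    return alpha * direct + (1 - alpha) * indirect

def is_malicious(trust, threshold=0.5):
    return trust < threshold

t = combined_trust(direct=0.3, recommendations=[0.4, 0.2, 0.5])
print(round(t, 3),
      "-> discard messages" if is_malicious(t) else "-> verify further")
```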
A Vicenary Analysis of SARS-CoV-2 Genomes
Dr Kshira Sagar Sahoo, Sk Sarif Hassan., Ranjeet Kumar Rout., Thamer A Tabbakh., Zahrah A Almusaylim., N Z Jhanjhi., Saiyed Umer
Source Title: Computers, Materials and Continua, DOI Link,
View abstract ⏷
Coronaviruses are responsible for various diseases ranging from the common cold to severe infections like the Middle East respiratory syndrome and the severe acute respiratory syndrome. However, a new coronavirus strain known as COVID-19 developed into a pandemic, resulting in an ongoing global public health crisis. Therefore, there is a need to understand the genomic transformations that occur within this family of viruses in order to limit disease spread and develop new therapeutic targets. The nucleotide sequences of SARS-CoV-2 consist of several bases, which can be classified into purines and pyrimidines according to their chemical composition. Purines include adenine (A) and guanine (G), while pyrimidines include cytosine (C) and thymine (T). There is a need to understand the spatial distribution of these bases on the nucleotide sequence to facilitate the development of antivirals (including neutralizing antibodies) and epitopes necessary for vaccine development. This study aimed to evaluate all the purine and pyrimidine associations within the SARS-CoV-2 genome sequence by measuring mathematical parameters including Shannon entropy, the Hurst exponent, and the nucleotide guanine-cytosine (GC) content. The Shannon entropy is used to identify closely associated sequences, whereas the Hurst exponent is used to identify the autocorrelation of purine-pyrimidine bases even when their organization differs. Different frequency patterns can be used to determine the distribution of the four bases and the density of each base. The GC-content is used to understand the stability of the DNA. The relevant genome sequences were extracted from the National Center for Biotechnology Information (NCBI) virus database. Furthermore, the phylogenetic properties of the COVID-19 virus were characterized to compare its closeness with other coronaviruses by evaluating the purine and pyrimidine distribution.
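For intuition, the sketch below computes GC-content and purine/pyrimidine frequencies for a toy sequence; real genomes would be fetched from the NCBI virus database.

```python
# GC-content and purine/pyrimidine frequencies, a minimal sketch of the
# per-base measures above. The sequence is a toy example, not a real
# SARS-CoV-2 genome.
from collections import Counter

seq = "ATGGCGTACGTTAGC"
counts = Counter(seq)
n = len(seq)

gc_content = (counts["G"] + counts["C"]) / n    # DNA stability proxy
purines = (counts["A"] + counts["G"]) / n
pyrimidines = (counts["C"] + counts["T"]) / n

print(f"GC = {gc_content:.2%}, purines = {purines:.2%}, "
      f"pyrimidines = {pyrimidines:.2%}")
```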
SDCF: A Software-Defined Cyber Foraging Framework for Cloudlet Environment
Dr Kshira Sagar Sahoo, S Nithya., M Sangeetha., K N Apinaya Prethi., Sanjaya Kumar Panda., Amir H Gandomi
Source Title: IEEE Transactions on Network and Service Management, DOI Link,
View abstract ⏷
-
An Ensemble-Based Scalable Approach for Intrusion Detection Using Big Data Framework
Dr Kshira Sagar Sahoo, Santosh Kumar Sahu., Durga Prasad Mohapatra., Jitendra Kumar Rout., Ashish Kr Luhach
Source Title: Big Data, DOI Link,
View abstract ⏷
We set up a scalable framework for large-scale data processing and analytics using a big data framework. Popular classification methods are implemented, tuned, and evaluated using intrusion datasets, with the objective of selecting the best classifier after optimizing the hyper-parameters. We observed that the decision tree (DT) approach outperforms the other methods in terms of classification accuracy, fast training time, and improved average prediction rate; therefore, it is selected as the base classifier in our proposed ensemble approach to study class imbalance. As the intrusion datasets are imbalanced, most classification techniques are biased toward the majority class, and the misclassification rate is higher for the minority class. An ensemble-based method using K-Means, RUSBoost, and DT approaches is proposed to mitigate the class imbalance problem; to empirically investigate the impact of class imbalance on the performance of classification approaches; and to compare the results using popular performance metrics such as Balanced Accuracy, Matthews Correlation Coefficient, and F-Measure, which are more suitable for assessing imbalanced datasets.
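A minimal sketch of boosting with random under-sampling over decision trees on an imbalanced two-class problem is shown below, using imbalanced-learn's RUSBoostClassifier; the synthetic data and default parameters are illustrative, not the paper's K-Means-assisted setup.

```python
# RUSBoost (boosting + random under-sampling) on a synthetic imbalanced
# dataset, evaluated with imbalance-aware metrics. Data and parameters
# are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score, matthews_corrcoef
from imblearn.ensemble import RUSBoostClassifier

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05],
                           random_state=0)  # ~5% minority "attack" class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RUSBoostClassifier(random_state=0)   # decision-tree base learner by default
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("Balanced accuracy:", balanced_accuracy_score(y_te, pred))
print("MCC:", matthews_corrcoef(y_te, pred))
```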
Energy Efficiency in Software Defined Networking: A Survey
Dr Kshira Sagar Sahoo, Sudhansu Sekhar Patra., Bibhudatta Sahoo., Deepak Puthal., Suchismita Rout
Source Title: SN Computer Science, DOI Link,
View abstract ⏷
Software defined networking has solved many challenging issues in the networking industry. It separates the control plane from the data forwarding plane, which makes SDN more powerful than traditional networking. However, energy costs increase the overall network cost, so this issue needs to be addressed to improve design requirements and boost networking performance. In this article, several energy efficiency techniques are discussed. To present them in more detail, a thematic taxonomy of energy efficiency techniques in SDN is given, drawing on several technical studies from past research. These studies are categorized into three subcategories: traffic-aware models, end-host-aware models, and rule placement. For each model, the objective functions, parameters, and constraints are detailed. Furthermore, useful insights into each approach, its advantages and disadvantages, and a comprehensive analysis of energy efficiency techniques are also discussed. Finally, the paper highlights future directions for energy efficiency in SDN.
IoT-IIRS: Internet of Things based intelligent-irrigation recommendation system using machine learning approach for efficient water usage
Dr Kshira Sagar Sahoo, Anand Nayyar., Ashutosh Bhoi., Rajendra Prasad Nayak., Sourav Kumar Bhoi., Srinivas Sethi., Sanjaya Kumar Panda
Source Title: PeerJ. Computer science, DOI Link,
View abstract ⏷
In the traditional irrigation process, a huge amount of water is consumed, which leads to water wastage. To reduce the water wasted in this tedious task, an intelligent irrigation system is urgently needed. The era of machine learning (ML) and the Internet of Things (IoT) brings a great advantage in building an intelligent system that performs this task automatically with minimal human effort. In this study, an IoT-enabled, ML-trained recommendation system is proposed for efficient water usage with nominal intervention from farmers. IoT devices are deployed in the crop field to precisely collect ground and environmental details. The gathered data are forwarded to and stored in a cloud-based server, which applies ML approaches to analyze the data and suggest irrigation to the farmer. To make the system robust and adaptive, an inbuilt feedback mechanism is added to the recommendation system. The experimentation reveals that the proposed system performs quite well on both our own collected dataset and the National Institute of Technology (NIT) Raipur crop dataset.
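The recommendation step can be sketched as a classifier over ground and environmental readings that outputs an irrigate/skip decision. The features, toy data, and model choice below are assumptions for illustration, not the paper's trained system.

```python
# A classifier over ground/environment readings deciding whether to
# irrigate. Features, data, and model choice are illustrative.
from sklearn.ensemble import RandomForestClassifier

# [soil_moisture %, air_temp C, humidity %] -> irrigate? (1 = yes)
X = [[18, 34, 40], [45, 28, 60], [12, 38, 30], [50, 25, 70],
     [20, 33, 45], [48, 26, 65]]
y = [1, 0, 1, 0, 1, 0]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("Irrigate" if clf.predict([[15, 36, 35]])[0] else "Skip irrigation")
```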
CLAPS: Course and Lecture Assignment Problem Solver for Educational Institution Using Hungarian Method
Dr Kshira Sagar Sahoo, Mamoona Humayun., Chittaranjan Mallick., Sourav Kumar Bhoi., Kalyan Kumar Jena., Mudassar Hussain Shah
Source Title: Turkish Journal of Computer and Mathematics Education, DOI Link,
View abstract ⏷
-
Bankruptcy Prediction using Robust Machine Learning Model
Dr Kshira Sagar Sahoo, Amer Tabbakh., Jitendra Kumar Rout., Mudassar Hussain Shah., Minakhi Rout., N Z Jhanjhi
Source Title: Turkish Journal of Computer and Mathematics Education, DOI Link,
View abstract ⏷
-
Structural Mining for Link Prediction Using Various Machine Learning Algorithms
Dr Kshira Sagar Sahoo, Ranjan Kumar Behera., Debadatt Naik., Santanu Kumar Rath., Bibhudatta Sahoo
Source Title: International Journal of Social Ecology and Sustainable Development, DOI Link,
View abstract ⏷
Link prediction is an emerging research problem in social network analysis, where future possible links are predicted based on the structural or content information associated with the network. In this paper, various machine learning (ML) techniques have been utilized to predict future possible links based on features extracted from the topological structure. Feature sets have been prepared by measuring different similarity metrics between every pair of nodes between which no link currently exists. For predicting future possible links, various supervised ML algorithms such as K-NN, MLP, bagging, SVM, and decision tree have been implemented. The model has been trained to identify new links which are likely to appear in the future but do not currently exist in the network, and is validated through various performance metrics.
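A minimal sketch of the feature-extraction step with networkx: similarity indices computed over non-adjacent node pairs form the feature set that a supervised classifier would then be trained on. The graph and index choices are illustrative assumptions.

```python
# Topological feature extraction for link prediction: similarity scores
# for non-adjacent node pairs become the classifier's feature set.
import networkx as nx

G = nx.karate_club_graph()
non_edges = list(nx.non_edges(G))   # candidate future links

# Similarity indices over the same candidate pairs.
jaccard = {(u, v): s for u, v, s in nx.jaccard_coefficient(G, non_edges)}
adamic = {(u, v): s for u, v, s in nx.adamic_adar_index(G, non_edges)}

features = [[jaccard[p], adamic[p],
             len(list(nx.common_neighbors(G, *p)))] for p in non_edges]
print(non_edges[0], features[0])    # one candidate pair and its features
```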
Imperative Dynamic Routing Between Capsules Network for Malaria Classification
Dr Kshira Sagar Sahoo, N Z Jhanjhi., G Madhu., A Govardhan., B Sunil Srinivas., K S Vardhan., B Rohit
Source Title: Computers, Materials and Continua, DOI Link,
View abstract ⏷
Malaria is a severe epidemic disease caused by Plasmodium falciparum. The parasite causes critical illness if it persists for longer durations, and delays in precise treatment can lead to further complications. An automatic diagnostic model provides aid for medical practitioners, enabling fast and efficient diagnosis. Most existing work utilizes fully connected convolutional neural networks with successive pooling layers, which cause a loss of pixel-level information. Further, convolutions can capture spatial invariances but cannot capture rotational invariances. To overcome these limitations, this research develops an Imperative Dynamic routing mechanism with fully trained capsule networks for malaria classification. The model identifies the presence of malaria parasites by classifying thin blood smears containing samples of parasitized and healthy erythrocytes. The proposed model is compared with and evaluated against machine vision models that have evolved over the past decade, such as VGG, ResNet, DenseNet, and MobileNet. The problems in previous research are cautiously addressed and overhauled using the proposed capsule network, attaining the highest Area Under the Curve (AUC) of 99.03% and Specificity of 99.43% for 20% test samples. To understand the underlying behavior of the proposed network, various tests were conducted for variant shuffle patterns. The model was analyzed and assessed in distinct environments to depict its resilience and versatility. To provide greater generalization, the proposed network was also tested on thick blood smear images, where it again achieved superior performance.
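A compact numpy sketch of the routing-by-agreement step between capsule layers is shown below; the dimensions are toy values, and the full network would also learn the transformation weights that produce the prediction vectors.

```python
# Dynamic routing by agreement between capsule layers, as a minimal
# numpy sketch. Dimensions are toy values for illustration.
import numpy as np

def squash(s):
    # Shrinks short vectors toward 0 and long vectors toward unit length.
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm2 / (1 + norm2)) * s / np.sqrt(norm2 + 1e-9)

def route(u_hat, iterations=3):
    # u_hat: predictions from I input capsules for J output capsules,
    # shape (I, J, D).
    I, J, _ = u_hat.shape
    b = np.zeros((I, J))                                      # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum
        v = squash(s)                                         # output capsules
        b += (u_hat * v[None]).sum(axis=-1)                   # agreement update
    return v

u_hat = np.random.randn(8, 2, 4)   # 8 input capsules, 2 classes, 4-D poses
print(route(u_hat).shape)           # (2, 4)
```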