Scalable Computing: Practice and Experience
//www.scpe.org/index.php/scpe
<p style="text-align: justify;"><span style="text-decoration: underline;"><em>Topics of interest</em></span>. The area of scalable computing has matured and reached a point where new issues and trends require a professional forum. SCPE provides this avenue by publishing original refereed papers that address the present as well as the future of parallel and distributed computing. The journal focuses on algorithm development, implementation and execution on parallel and distributed architectures, as well as on the application of parallel and distributed computing to the solution of real-life problems.</p> <p style="text-align: justify;"><span style="text-decoration: underline;"><em>Electronic journal</em></span>. SCPE provides immediate open access to its content, following the principle that making research freely available to the public supports a greater global exchange of knowledge. We invite you to browse the open content of the volumes and to consider this publication as a venue for promoting your results and achievements. No publication or access fees are charged.</p> <p style="text-align: justify;"><span style="text-decoration: underline;"><em>Indexing.</em></span> The journal is indexed by several organizations (see the complete list <a title="Journal Indexing" href="https://scpe.org/index.php/scpe/JournalIndexing">here</a>). Current impact indicators are the following: position <a href="https://www.scopus.com/sourceid/21100208072?origin=sbrowse#tabs=1">128 of 206 journals</a> in Computer Science in Scopus, and an h-index of 24 as computed by Publish or Perish. SCPE is included in the <a title="SCPE in extended list of Thomson Reuters" href="http://science.thomsonreuters.com/cgi-bin/jrnlst/jlresults.cgi?PC=EX&ISSN=1895-1767" target="_blank" rel="noopener">Clarivate Analytics (formerly Thomson Reuters) Emerging Sources Citation Index</a> and has appeared in the Web of Science collection since 2015. 
In 2021 the journal was listed in Q4 of the JCR.</p> <p style="text-align: justify;"><span style="text-decoration: underline;"><em>Publicity</em></span>. The SCPE flyer is available <a href="http://web.info.uvt.ro/~petcu/SCPE-flyer.pdf">here</a>.</p>
West University of Timisoara, ROMANIA | en-US | Scalable Computing: Practice and Experience | ISSN 1895-1767
Software Effort Estimation using Machine Learning Algorithms
//www.scpe.org/index.php/scpe/article/view/2213
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Effort estimation is a crucial aspect of software development, as it helps project managers plan, control, and schedule the development of software systems. This research study compares various machine learning techniques for estimating effort in software development, focusing on the most widely used and recent methods. The paper begins by highlighting the significance of effort estimation and its associated difficulties. It then presents a comprehensive overview of the different categories of effort estimation techniques, including algorithmic, model-based, and expert-based methods. The study concludes by comparing methods for a given software development project. Among the regression algorithms tested, including Support Vector, Linear, and Decision Tree Regression, the Random Forest Regression algorithm performs best on the given dataset. Additionally, the research identifies areas for future investigation in software effort estimation, including the requirement for more accurate and reliable methods and the need to address the inherent complexity and uncertainty in software development projects. This paper provides a comprehensive examination of the current state of the art in software effort estimation, serving as a resource for researchers in the field of software engineering.</p>Kruti Lavingia, Raj Patel, Vivek Patel, Ami Lavingia
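A minimal sketch of the comparison the abstract describes, using scikit-learn on synthetic project data; the study's actual dataset and feature set are not reproduced here, so all features and values are illustrative:

```python
# Compare the regressors named in the abstract on synthetic "effort" data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 5))                       # project features (size, team, ...)
y = 10 * X[:, 0] + 5 * X[:, 1] ** 2 + rng.normal(0, 0.5, 500)  # effort (person-months)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
    "Linear": LinearRegression(),
    "SVR": SVR(),
    "DecisionTree": DecisionTreeRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(mean_absolute_error(y_te, model.predict(X_te)), 3))
```

On the paper's own dataset the ranking reported above (Random Forest best) may of course differ from what this toy data produces.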
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 1276–1285 | DOI 10.12694/scpe.v25i2.2213
An Elixir for Blockchain Scalability with Channel based Clustered Sharding
//www.scpe.org/index.php/scpe/article/view/2441
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Blockchain refers to a distributed ledger technology that stores records without the help of a central authority. Introduced with Bitcoin, this technology finds applications in healthcare, land registry, education, the pharmaceutical industry, digital records, manufacturing, and more. Properties of blockchain such as immutability, its distributed nature, and tamper resistance have made it a disruptive technology in many applications. The highlighting feature of this pioneering technology is the distributed storage of the ledger on all the nodes of the network, which achieves decentralization without trust in a third party. Transactions are proposed, executed, validated, and then added as blocks to the blockchain. A common problem across blockchain frameworks is scalability, with respect to both storage space and throughput, and scalability is the most significant factor to be considered in this big-data era. This article proposes a solution called the Channel Based Clustered Sharding (CBCS) approach for the Hyperledger Fabric blockchain framework. In this work, a lookup table is maintained that helps in forwarding transactions to the clustered shards for validation. The CBCS approach enables parallel transaction processing, which in turn improves the scalability and throughput of the system. The performance of the proposed work is measured with the help of Hyperledger Caliper, a benchmarking tool for the performance analysis of Hyperledger Fabric. The results show that the throughput of the proposed system increases from 3000 tps to 30,000 tps.</p>V. Vinoth Kumar, U. Padmavathi, C. Prasanna Ranjith, J Balaji, C.N.S. Vinoth Kumar
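The lookup-table routing idea can be sketched in a few lines; this is an illustrative stand-in, not the paper's Hyperledger Fabric implementation, and the channel and shard names are invented:

```python
# Route each transaction, via a lookup table, to the clustered shard that
# will validate it, so shards can process disjoint batches in parallel.
from collections import defaultdict

NUM_SHARDS = 4
lookup = {f"channel-{i}": i % NUM_SHARDS for i in range(8)}  # channel -> shard

def route(tx):
    """Forward a transaction to its shard based on the channel it targets."""
    return lookup[tx["channel"]]

txs = [{"id": n, "channel": f"channel-{n % 8}"} for n in range(20)]
shards = defaultdict(list)
for tx in txs:
    shards[route(tx)].append(tx["id"])

for shard, batch in sorted(shards.items()):
    print(f"shard {shard} validates {len(batch)} txs")
```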
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 997–1004 | DOI 10.12694/scpe.v25i2.2441
Feature Extraction and Classification of Gray-Scale Images of Brain Tumor using Deep Learning
//www.scpe.org/index.php/scpe/article/view/2456
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Deep learning with CNNs plays a paramount role in classification methods applied to medical image data. With a crucial role in accurate diagnosis, treatment planning, and patient management for medical and healthcare systems, CNNs have won accolades in deep learning research: the simpler the learning model, the more precise the results for decision making. The proposed Sequential CNN model is built with a Parametric ReLU whose parameter is aligned to the geometric mean (GMP-ReLU), attaining the specific goal of tumor classification. The additional support of ground truth aids in deciding the shape and severity of the tumor in grayscale brain-tumor MRI. The simple Sequential model, although minimal, achieves significant classification goals using GMP-ReLU. Comparative results with variants of ReLU are charted in this article as evidence of a consistent classification model with parametric ReLU. The proposed design is evaluated on images from Kaggle, and a model is trained (a classifier is built) that can be considered an ideal filter for all the benchmark images. The accuracy of the proposed design is considerably improved compared to the normal ReLU, up to 89.214%.</p>Pranitha Kondra, Naresh Vurukonda
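A minimal sketch of the idea of a Parametric ReLU whose negative slope is tied to the geometric mean of the positive activations; the exact GMP-ReLU formulation used in the paper is not given in the abstract, so this particular definition (including the 0.1 scaling) is an assumption:

```python
# Parametric ReLU with a data-derived negative slope (illustrative GMP-ReLU).
import numpy as np

def gmp_relu(x):
    pos = x[x > 0]
    # geometric mean of positive activations, scaled into a small slope
    a = np.exp(np.mean(np.log(pos))) if pos.size else 0.01
    slope = min(a, 1.0) * 0.1
    return np.where(x > 0, x, slope * x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(gmp_relu(x))  # negative inputs are scaled by the derived slope
```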
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 1005–1017 | DOI 10.12694/scpe.v25i2.2456
A Class Specific Feature Selection Method for Improving the Performance of Text Classification
//www.scpe.org/index.php/scpe/article/view/2502
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Recently, a significant amount of research has been carried out in the field of feature selection. Although these methods help to increase the accuracy of machine learning classification, the selected subset of features considers all the classes together and may not select recommendable features for a particular class. The main goal of our paper is to propose a new class-specific feature selection algorithm that is capable of selecting an appropriate subset of features for each class. In this regard, we first perform class binarization and then select the best features for each class. During the feature selection process, we deal with class imbalance problems and redundancy elimination. The Weighted Average Voting Ensemble method is used for the final classification. Finally, we carry out experiments to compare our proposed feature selection approach with existing popular feature selection methods. The results show that our feature selection method outperforms the existing methods by more than 37% in accuracy.</p>Venkatesh V, Sharan S B, Mahalaxmy S, Monisha S, Ashick Sanjey D S, Ashokkumar P
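The pipeline described above, class binarization, a per-class feature subset, and weighted voting, can be sketched with scikit-learn; the specific scorer (ANOVA F-test) and binary classifier (logistic regression) are illustrative choices, not the paper's:

```python
# Class-specific feature selection: one-vs-rest binarization, a feature
# subset per class, then weighted soft voting over the per-class scores.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=30, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

per_class = {}
for c in np.unique(y_tr):
    y_bin = (y_tr == c).astype(int)                        # class binarization
    sel = SelectKBest(f_classif, k=8).fit(X_tr, y_bin)     # class-specific subset
    clf = LogisticRegression(max_iter=1000).fit(sel.transform(X_tr), y_bin)
    per_class[c] = (sel, clf)

# weighted-average voting: weight each class score by its binary model's
# training accuracy
scores = np.zeros((len(X_te), len(per_class)))
for c, (sel, clf) in per_class.items():
    w = clf.score(sel.transform(X_tr), (y_tr == c).astype(int))
    scores[:, c] = w * clf.predict_proba(sel.transform(X_te))[:, 1]
pred = scores.argmax(axis=1)
print("accuracy:", round((pred == y_te).mean(), 3))
```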
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 1018–1028 | DOI 10.12694/scpe.v25i2.2502
Breast Cancer Image Classification based on Adaptive Interpolation Approach Using Clinical Dataset
//www.scpe.org/index.php/scpe/article/view/2523
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In the healthcare and bioinformatics disciplines, the categorization of breast cancer has become an emerging paradigm, as breast cancer is the second most common cause of cancer-related mortality in women. A biopsy is a procedure in which tissue is examined by histopathologists to determine whether or not it is breast cancer, a process that may lead to a mistaken diagnosis. The research focuses on patient data (884 case reports) acquired from the American Oncology Institute, to which preprocessing techniques are applied; missing values are recovered with the novel Modified Interpolation (MI) method. Deep learning networks effectively detect and assess patterns for annotating histological data based on the labelling, which saves time and system cost and enhances system accuracy. This framework addresses feature acquisition and missing-value analysis strategies based on an entropy confidence-weight factor. First, iterative patterns are treated as potential diagnostic rules, and an attention-based rule combination formulates the classification problem by integrating convolutional and recurrent neural networks with the short-term and long-term spatial correlations between patches. Second, the key part of label construction is carried out with an entropy confidence-weight factor assessment, which detects and predicts different patterns to construct the classification rule. Third, clustering data are optimized by assessing missing parameters based on mean square error and interpolation, reducing data loss by around 20% and enhancing system accuracy. Simulation results show that the proposed system achieves 91.3% accuracy relative to state-of-the-art approaches and is potentially applicable in clinical settings. The Modified Interpolation (MI) method recovered missing values with a least mean square error of 0.0123 and a data loss of only 1.38%. The method is also compared with the existing Linear Interpolation (LI) method, which yields a mean square error and data loss of 3.295 and 18.925%, respectively. Comparatively, the modified interpolation method recovered the missing values with lower mean square error and less data loss.</p>Sushmitha Uddaraju, Kousalya A, Hemalatha I, Maragatharajan M, Bala Subramanian C, Sathish Kumar L
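The paper's Modified Interpolation method is not specified in the abstract, so the sketch below only demonstrates the evaluation setup it describes: mask values in a series, recover them by interpolation (plain linear interpolation here as a stand-in), and report the mean square error of the recovered entries:

```python
# Mask values, recover them by interpolation, and score the recovery by MSE.
import numpy as np

rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0, 4 * np.pi, 100)) + rng.normal(0, 0.05, 100)
series = truth.copy()
missing = rng.choice(100, size=15, replace=False)
series[missing] = np.nan                     # simulate missing clinical values

known = ~np.isnan(series)
recovered = np.interp(np.arange(100), np.arange(100)[known], series[known])
mse = np.mean((recovered[missing] - truth[missing]) ** 2)
print("MSE on recovered values:", round(float(mse), 5))
```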
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 1029–1039 | DOI 10.12694/scpe.v25i2.2523
SecureSense: Enhancing Person Verification through Multimodal Biometrics for Robust Authentication
//www.scpe.org/index.php/scpe/article/view/2524
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Biometrics provide enhanced security and convenience compared to conventional methods of individual authentication. A more robust and effective method of individual authentication has emerged due to recent advancements in multimodal biometrics: unimodal systems offer lower security and lack the robustness found in multimodal biometric systems. The research paper introduces a novel approach employing multiple biometric modalities, including face, fingerprint, and iris, to authenticate users in a multimodal biometric system. The paper proposes the "SecureSense" framework, which combines multiple biometric modalities to improve person verification accuracy. The proposed system utilizes both web-based and real-time datasets. For the web-based dataset, we employed the Chicago Face dataset for facial data, the MMU1 dataset for iris data, and the SOCOFing dataset for fingerprint data. In real-time data collection, facial data is captured using a Zebronics Zeb-Gem webcam, fingerprint data is obtained using the Mantra MFS scanner, and iris data is collected using the Mantra MIS scanner. In the envisioned system, we introduce an innovative approach that employs a decision-level fusion technique across three distinct biometric modalities, resulting in an impressive accuracy rate of approximately 93% across all modalities.</p>Samatha J, Madhavi G
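Decision-level fusion can be illustrated in a few lines; the abstract does not state which fusion rule SecureSense uses, so the majority vote below is an assumption:

```python
# Decision-level fusion over three modalities (face, fingerprint, iris):
# each matcher emits an accept/reject decision; the fusion rule accepts
# when a majority of the modalities accept.
def fuse(face: bool, fingerprint: bool, iris: bool) -> bool:
    votes = [face, fingerprint, iris]
    return sum(votes) >= 2  # majority of the three modalities

print(fuse(True, True, False))   # two modalities accept -> accepted
print(fuse(False, True, False))  # only one accepts -> rejected
```

Other common choices at this level are AND (all modalities must accept, lower false acceptance) and OR (any modality suffices, lower false rejection).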
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 1040–1054 | DOI 10.12694/scpe.v25i2.2524
Radiogenomics in Oncology
//www.scpe.org/index.php/scpe/article/view/2539
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">The emergence of artificial intelligence in the digital era has brought about a significant transformation in the field of clinical decision support systems. The advent of technological advancements has led to the development of novel data-driven analytical algorithms, hence greatly augmenting human capacity to process information. The field of cancer radiogenomics presents a promising area within the realm of precision medicine. The objective of our research is to enhance our understanding of the genetic factors that contribute to the formation of tumors. This will be achieved by integrating extensive radiomics features extracted from medical imaging, genetic data obtained from clinical-epidemiological sources, and insights derived from high-throughput sequencing using mathematical modelling techniques. The aim of integrating radiomics and genomes is to gain a deeper understanding of the complex mechanisms behind cancer growth. The primary aim is to develop novel, empirically supported methodologies for the identification, prediction, and individualized therapeutic strategies for cancer, utilizing the acquired understanding. This comprehensive review aims to provide an overview of the existing body of research on the applications of radiogenomics, with a specific focus on solid malignancies. Additionally, we will examine the barriers that are now preventing the widespread integration of radiomics into therapeutic contexts.</p>Sowmya V L, Bharathi Malakreddy A, Santhi Natarajan
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 1055–1072 | DOI 10.12694/scpe.v25i2.2539
Sensor based Dance Coherent Action Generation Model using Deep Learning Framework
//www.scpe.org/index.php/scpe/article/view/2648
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Dance coherent action generation is a popular research task in recent years for generating movements and actions for computer-generated characters in a simulated environment; it is sometimes referred to as "Motion Synthesis". Motion synthesis algorithms are used to generate physically believable, visually compelling, and contextually appropriate movement using motion sensors. The Dance Coherent Action Generation Model (DCAM) is a generative framework for producing aesthetically pleasing movements, using deep learning from small amounts of data. By learning an internal representation of motion dynamics, DCAM can synthesize long sequences of movements in which coherent patterns can be created through latent-space interpolation. This framework provides a mechanism for varying the amplitude of the generated motion, allowing more realistic movement and expression. The proposed model obtained 93.79% accuracy, 93.79% precision, 97.75% recall, and a 92.92% F1 score. DCAM exploits the balance between imitation and creativity by enabling the production of novel outputs from limited input data and can be trained in an unsupervised manner or fine-tuned with sparse supervision. Furthermore, the framework is easily extended to handle multiple layers of abstraction and can be personalized to a particular type of movement, enabling the generation of highly individualized outputs.</p>Hanzhen Jiang, Yingdong Yan
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 1073–1090 | DOI 10.12694/scpe.v25i2.2648
IoT based Dance Movement Recognition Model based on Deep Learning Framework
//www.scpe.org/index.php/scpe/article/view/2651
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Deep learning is becoming an emerging field in the Internet of Things (IoT) due to its ability to provide a comprehensive approach to automatic feature extraction and predictive modeling for analysis and decision-making. This paper introduces an IoT-based dance movement recognition model based on a deep learning framework. The framework consists of a convolutional neural network (CNN) with a data-centric architecture that identifies dance movements from data gathered by an IoT device. The IoT device collects 3D motion data captured by three accelerometers. Feature extraction is then done with the CNN architecture, resulting in a flattened matrix representing the movement. Subsequently, a Multi-Layer Perceptron (MLP) is used to classify the movements. The proposed system is experimentally evaluated on a standardized dataset of 16 dance steps at three speed levels. The results show that our model outperforms state-of-the-art approaches in classification accuracy and evaluation time. The proposed model reached 90.74% accuracy, 87.12% precision, 83.78% recall, and an 84.39% F1-score. The proposed model can serve as a basis for a reliable and intuitive system for monitoring patients' dance movements accurately.</p>Zhen Ji, Yaonong Tian
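A shape-level sketch of the pipeline in the abstract: windows of 3-axis accelerometer data are convolved into features, flattened, and classified by an MLP. A simple moving-average 1-D convolution stands in for the paper's CNN, and the synthetic data, class count, and window length are all invented for illustration:

```python
# Accelerometer windows -> conv features -> flattened matrix -> MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, win = 200, 32
X_raw = rng.normal(size=(n, win, 3))       # 3 accelerometer channels per window
y = rng.integers(0, 4, size=n)             # 4 hypothetical dance steps
X_raw += y[:, None, None] * 0.8            # separate classes so the toy task is learnable

kernel = np.ones(5) / 5                    # crude conv "feature extractor"
feats = np.stack([
    np.stack([np.convolve(x[:, c], kernel, mode="valid") for c in range(3)], axis=1)
    for x in X_raw
])
X = feats.reshape(n, -1)                   # flattened matrix representing movement

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X, y)
print("train accuracy:", round(clf.score(X, y), 3))
```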
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 1091–1106 | DOI 10.12694/scpe.v25i2.2651
Methodology for Developing an IoT-based Parking Space Counter System using XNO
//www.scpe.org/index.php/scpe/article/view/2459
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Blockchain-IoT integration is treated as a future technological value by many adopters despite the cost and complexity involved. However, technological advancements brought in by communities make such solutions affordable and simple enough to be used in applications such as parking space counters. This research portrays the use of XNO, a digital currency, in an alternative way to keep track of available parking spaces via IoT nodes installed at the entry and exit points of parking lots. With this approach, the available-parking-space data can be displayed on LED boards at the entry point of the parking lots and on a website for remote status viewing. An add-on of this research is the issuing of entry tickets with a timestamp and unique ID using the block data produced during asset transfer. This research can be further extended to the collection of parking fees with the help of the IoT nodes at the exit points of the parking lots.</p>Sujanavan Tiruvayipati, Ramadevi Yellasiri, Vikram Narayandas, Archana Maruthavanan, Anupama Meduri
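The counter logic behind the entry/exit nodes can be sketched independently of the XNO asset-transfer and LED-display parts, which are omitted here; class and method names are illustrative:

```python
# Minimal occupancy counter driven by the entry/exit IoT nodes.
class ParkingCounter:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.occupied = 0
        self._next_ticket = 0

    def on_entry(self):
        """Called by the entry-point node; returns a ticket stub or None if full."""
        if self.occupied >= self.capacity:
            return None                      # lot full, no ticket issued
        self.occupied += 1
        self._next_ticket += 1
        return {"ticket_id": self._next_ticket, "free": self.free()}

    def on_exit(self):
        """Called by the exit-point node."""
        self.occupied = max(0, self.occupied - 1)

    def free(self):
        return self.capacity - self.occupied

lot = ParkingCounter(capacity=2)
lot.on_entry()
lot.on_entry()
print("spaces free:", lot.free())   # 0
print("full:", lot.on_entry())      # None
lot.on_exit()
print("spaces free:", lot.free())   # 1
```

In the paper's design the ticket's timestamp and unique ID come from the block data of the XNO asset transfer rather than a local counter.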
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 800–811 | DOI 10.12694/scpe.v25i2.2459
A Study of Blockchain and Machine Learning-Enabled IoT Security in Time-Delayed Neural Network Vocal Pattern Recognition to Improve Web-Based Vocal Teaching
//www.scpe.org/index.php/scpe/article/view/2645
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">With the development of information technology, online vocal teaching is becoming increasingly popular, and the demands on teaching sound quality are rising accordingly. As online vocal instruction spreads, the need for high-quality sound in these digital environments becomes more critical. This research tackles the problem of improving sound quality in real-time vocal teaching by integrating advanced technologies such as blockchain and machine learning within an Internet of Things (IoT) security framework. We created a vocal recognition model using a Time-Delay Neural Network (TDNN) and improved it with a Generated Feature Vector (GFV). This integration yields a strong GTDNN vocal recognition system that is specifically designed to secure and optimize web-based vocal teaching. Our experiments show that GTDNN outperforms traditional TDNN and i-vector methods in feature vector extraction, adapting well to different speech environments. In various speech settings, GTDNN's Equal Error Rates (EERs) are impressively low at 11.3%, 12.0%, 4.9%, 6.2%, and 6.1%, indicating superior performance over comparison models. GTDNN has an EER of 9.6% for short-duration speech and 2.3% for long-duration speech. Furthermore, the GTDNN system achieves an overall pass rate of 94% for target speech and an impressive rejection rate for non-target speech, ensuring high accuracy in a variety of speech environments.</p>Kaiyi Long
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 812–823 | DOI 10.12694/scpe.v25i2.2645
Application of Genetic Algorithm in Optimization Simulation of Industrial Waste Land Reuse
//www.scpe.org/index.php/scpe/article/view/2483
<p>In order to better understand the application of optimization simulation to industrial waste land reuse, the author proposes a study based on a nonlinear genetic algorithm for the optimization simulation of industrial waste land reuse. The author takes the landscape renovation and reuse of industrial waste sites as the research object and, through research on the current state of landscape renovation and reuse of industrial waste sites both domestically and internationally, as well as on-site inspections, attempts to use landscape design techniques to deal with this once glorious but now declined industrial landscape. Secondly, a genetic algorithm for enhancing the timeliness of industrial waste land reuse is proposed; based on random walks, it combines users' long-term and short-term preferences to calculate the most suitable Top-N industrial waste land reuse optimization model for the current period. Finally, the two algorithms proposed by the author were experimentally validated. On the CiteULike dataset, the best performance was achieved at a=0.4, while on the JD dataset, the best performance was achieved at a=0.6. When k=6, the hit rate decreases significantly, by about 50%. The URT-R genetic algorithm exhibits a high recommendation hit rate in recommendations targeting timeliness. The author analyzed the different characteristics of industrial waste reuse in scenic areas and optimized their essence and transformation methods, further improving the transformation and renewal methods of industrial waste land in the process of urban development in China. The author hopes to provide useful references for future research on related topics and practices.</p>Peng Bai, Yunan Zhao, Junjia Chang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 824–831 | DOI 10.12694/scpe.v25i2.2483
Space Layout Simulation of Assembled Nanoarchitecture Based on Improved Particle Swarm Optimization
//www.scpe.org/index.php/scpe/article/view/2484
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In order to solve the problem that the traditional building space configuration model cannot optimize building space characteristics, the author proposes an optimization of building space utilization based on spatialized particle swarm optimization. First, to solve the problem of optimal allocation of space units, the PSO algorithm is modified to encode the space units by means of character coding. Secondly, the maximum standardization method is used for data processing, the factors affecting space utilization are summarized, and the objective function of optimal allocation of architectural space is given from three aspects: economic benefits, social benefits, and ecological benefits. Finally, by analyzing the advantages and disadvantages of the master-slave parallel model and the point-to-point parallel model, a chained parallel structure is proposed. The experimental data are based on the utilization of building space in three regions in 2015; the vector map is divided into a grid of 30 m × 30 m cells, and all statistical and spatial data are projected onto each grid cell. The difference between the fitness values at final convergence of the three parallel models is small; the main difference is convergence speed. During the run-time test, the three parallel models were run under 8, 16, 32, and 64 nodes, respectively. Because it combines the advantages of the master-slave model and the point-to-point model, the running time of the chained parallel model is significantly lower than that of the other two parallel models. Conclusion: data simulation tests verify that the chained parallel model has higher fitness, faster convergence, and shorter running time, and its performance is better than the other two, indicating that the optimization algorithm proposed by the author performs well.</p>Huan Huang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 832–839 | DOI 10.12694/scpe.v25i2.2484
Optimization of Nonlinear Convolutional Neural Networks based on Improved Chameleon Group Algorithm
//www.scpe.org/index.php/scpe/article/view/2486
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In order to address the difficulty that architectural models established by CNNs for specific problems suffer from parameter overflow and inefficient training, an optimization algorithm for nonlinear convolutional neural networks based on an improved chameleon swarm algorithm is proposed. This article mainly introduces the use of the Chameleon Swarm Algorithm (CSA) to search the parameters of the CNN architecture, solve for them, and thereby optimize the model. Although the number of parameters that need to be set in a CNN is very large, this method can find a better search space for AlexNet samples with 5 different images. In order to improve performance, two candidate pruning algorithms are also proposed. The experimental results show that, compared with the traditional AlexNet model, the improved pruning method improves the image recognition ability of the Caffe primary parameter set from 1.3% to 5.7%. The method has wide applicability and can be applied to most neural networks that do not require the special functional modules of the AlexNet network model.</p>Qingtao Zhang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 840–847 | DOI 10.12694/scpe.v25i2.2486
Channel Estimation of Urban 5G Communication System based on Improved Particle Swarm Optimization Algorithm
//www.scpe.org/index.php/scpe/article/view/2507
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In order to solve the problem that the channel estimation accuracy of traditional urban communication systems is not high, the author proposes channel estimation for urban 5G communication systems based on an improved particle swarm optimization algorithm. This method converts channel estimation into a regression fit and adjusts the fit. For the regression fitting problem, big-data models are used on offline data to study channel nonlinearities and obtain an initial channel prediction model. To solve the adaptation problem, the author collects real-time training data in an online learning mode and integrates blended learning to update the model, avoiding excessive offline training cost. Offline tests show that the performance of the channel estimation model is best across different channels; as the signal-to-noise ratio increases, the MSE value stabilizes at around 1200. Conclusion: the channel estimation method can produce channel estimates with different characteristics in different situations and improves the signal recovery function of the communication system.</p>Xigang Xia, Bo Yang, Zhiyu Liu
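The idea of casting channel estimation as a regression fit solved by particle swarm optimization can be sketched with a generic PSO (not the paper's improved variant); the channel model, swarm parameters, and data are illustrative:

```python
# Generic PSO searching for coefficients (a, b) of a toy nonlinear channel
# model y = a*x + b*x**2 by minimizing mean squared error on offline data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.5 * x ** 2 + rng.normal(0, 0.01, 50)   # "measured" channel response

def fitness(p):
    a, b = p
    return np.mean((a * x + b * x ** 2 - y) ** 2)

n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5               # swarm size and coefficients
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("estimated (a, b):", np.round(gbest, 2))
```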
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 848–856 | DOI 10.12694/scpe.v25i2.2507
Multi-source and Multi-level Coordination of Energy Internet under V2G based on Particle Swarm Optimization Algorithm
//www.scpe.org/index.php/scpe/article/view/2509
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In order to effectively relieve the excessive load on the microgrid during peak hours of urban electricity consumption, a multi-source and multi-level coordination method for the energy Internet under V2G, based on the particle swarm optimization algorithm, is proposed. First, a mathematical model for V2G energy integration in microgrids is developed and a scheduling scheme based on the particle swarm optimization algorithm is adopted. Second, an improved PSO algorithm is proposed and experimentally validated, and the experimental results are compared with the previous particle swarm optimization algorithm. Experiments show that as the number of iterations increases, the value of the objective function decreases, and the optimal solution can be obtained before the maximum number of iterations is reached. The iteration speed and power processing cost of the improved PSO algorithm are better than before. The original load curve shows a trough from 23:00 to 6:00 and two load peaks, from 12:00 to 14:00 and from 19:00 to 22:00. V2G technology essentially realizes coordinated control of microgrid electric energy and achieves peak shaving and valley filling. The improved algorithm shows obvious improvement compared with the original power grid state. Conclusion: the application of EV V2G technology can smooth the daily load curve of the power grid and coordinate the electric energy of the microgrid to achieve "peak cutting and valley filling", and its effect is more outstanding than that of the previous algorithm. Finally, future development directions and suggestions for V2G technology are put forward. A power grid with a V2G discharge depth limit can basically reduce and eliminate the daily peak load, so the technology has broad research space and development prospects.</p>Jian Xu, Yunyan Chang, Xiaoming Sun
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | Vol. 25 No. 2, pp. 857–866 | DOI 10.12694/scpe.v25i2.2509
Numerical Simulation and Optimal Control of Composite Nonlinear Mechanical Parts Casting Process
//www.scpe.org/index.php/scpe/article/view/2536
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In order to understand the numerical simulation and optimal control of the casting process for machine elements, the author proposes a study of the numerical simulation and optimal control of the casting process for composite machine elements. Firstly, the author analyzes the structural characteristics of composite machine elements, introduces them in detail, and studies their casting technology in combination with the actual situation. Secondly, the casting process of composite machine elements is simulated using numerical methods, and the temperature field and flow field during mold filling and solidification are analyzed. Finally, selecting square cylinders with different wall thicknesses as typical components, process designs for the traditional single-material casting process and for a multi-material composite casting process based on dieless casting composite forming technology are carried out, and finite element numerical simulation and experimental research are conducted on the two processes. The results indicate that the casting obtained by the multi-material composite casting process solidifies almost simultaneously at around 200 seconds; the graphite morphology around the casting is Type A, with a length of about 100 μm, small differences, and uniform distribution. The minimum difference in tensile strength around the casting is about 3.8%, and the maximum increase in tensile strength is 21%. This research can provide a technical reference for high-performance, high-quality casting of complex iron castings. To improve casting quality, the author optimized the casting process of composite machine elements by numerical simulation.</p>Huan Li, Peng Wang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 867-873 | 10.12694/scpe.v25i2.2536 | Conservation Design of Industrial Heritage based on Nonlinear GA Optimization Algorithm and Three-dimensional Reconstruction
//www.scpe.org/index.php/scpe/article/view/2537
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In order to understand the industrial heritage protection design of Iterative reconstruction, the author proposes a research on industrial heritage protection design based on GA optimization algorithm and Iterative reconstruction. Firstly, the author establishes the 3D model of industrial heritage through Iterative reconstruction, and optimizes the model parameters through GA algorithm to achieve the purpose of protecting and utilizing industrial heritage. Secondly, the author proposes a method of Iterative reconstruction of industrial heritage based on GA algorithm, uses this method to conduct Iterative reconstruction of industrial heritage, and imports the reconstructed model into the 3D model management system for management. This method solves the problem of high reconstruction cost caused by low model quality in traditional Iterative reconstruction, and makes industrial heritage protection design more practical. Finally, an experimental analysis was conducted using a factory building in a certain city as an example. The results showed that the model optimized using the GA algorithm had significantly better performance than traditional reconstruction methods, and could more accurately reflect the spatial form and structural characteristics of industrial heritage, this provided new ideas and methods for the subsequent protection and utilization of industrial heritage. The GA algorithm optimized 3D model established by the author can effectively evaluate industrial heritage in historical urban areas, not only revealing the value of industrial heritage better, but also providing a certain reference for similar work in the future.</p> <p> </p>Yunan ZhaoPeng Bai
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 874-882 | 10.12694/scpe.v25i2.2537 | Lightweight Saliency Target Intelligent Detection based on Multi-scale Feature Adaptive Fusion
//www.scpe.org/index.php/scpe/article/view/2538
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In order to solve the problems of small targets, variable shooting angles, and heights in drone images, the author proposes an adaptive drone target intelligent detection algorithm based on multi-scale feature fusion. The results show that after adding a deconvolution cascade structure to the network, mAP increased by about 2.5 percentage points and AP$^{50}$ increased by about 3 percentage points. Compared with Method 3, Method 4 uses GA-RPN instead of RPN, and when the IOU is 75, the AP increases by 3.5 percentage points, reflecting that the target prediction candidate boxes generated using semantic features adaptively match better than the manually designed target candidate boxes. This indicates that the proposed target detection framework has better classification ability and higher frame regression accuracy. Multi scale adaptive candidate regions are used to generate fused features of different scales generated by the network, weighted fused multi-scale features are used for target prediction, and semantic features are used to guide the network to adaptively generate target candidate frames, greatly enhancing the feature expression ability of various targets and improving the detection accuracy of aerial targets.</p>Muqing Zhu
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 883-890 | 10.12694/scpe.v25i2.2538 | Optimization of Radio Energy Transmission System Efficiency Based on Genetic Algorithm
//www.scpe.org/index.php/scpe/article/view/2586
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In order to better understand the efficiency optimization problem of radio energy transmission systems, the author proposes a genetic algorithm based research on efficiency optimization of radio energy transmission systems. The author first addresses the issue of improving the efficiency of magnetic coupled radio energy transmission. On the basis of ensuring a certain transmission distance and voltage gain, the system is mathematically modeled using coupling circuit theory, and mathematical expressions such as transmission efficiency, transmission distance, and voltage gain are obtained as the objective functions of the algorithm. Secondly, the impact of metal obstacles on the transmission system was analyzed. Design a radio energy transmission compensation circuit, and through simulation, obtain three transmission system parameter schemes that meet the objective function and constraint conditions. Finally, the multi-objective genetic algorithm is used to optimize the system parameter design and obtain the optimal combination of transmission system parameters, with coupling coefficient k=0.1818 and mutual inductance coefficient M=23.165 × 10^(-5) H. Using multi-objective genetic algorithm, the algorithm has a fast convergence function in terms of the number of iterations, a non dominated function solution, and Pareto graphs have verified that the numerical value (3) in the text is the optimal combination design for the transmission system.</p>Ruijuan Du
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 891-899 | 10.12694/scpe.v25i2.2586 | Intelligent Detection and Analysis of Software Vulnerabilities based on Encryption Algorithms and Feature Extraction
//www.scpe.org/index.php/scpe/article/view/2587
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Implement status detection of ship software, identify the source of faults in problematic software, and release new software versions. Based on the above requirements, the author regards the detection and control of ship software status as the core research content. Based on the actual operating environment of ship software, the functional requirements of software status detection were studied and analyzed, and a set of ship software status detection was designed and implemented, a software inspection and maintenance platform that integrates ship software operation and maintenance, as well as ship software version release and update. The author conducted practical verification of the SM3 and SM2 hybrid encryption algorithm and selected software on the ship for detection. After analyzing the experimental results, it has been proven that using a hybrid algorithm for encryption and decryption, the server can accurately obtain software information on the ship's platform, detect the software status on the ship, and locate specific problem files. For software that does not meet the standard status, the server can accurately transmit software information to the ``component integration framework'' and put the component in a ``prohibited'' scheduling state. After the server repairs the problematic software, the detection results of the software change and display as legal, while the software is in the ``allowed'' scheduling state in the ``component integration framework''.</p> <p style="-qt-paragraph-type: empty; -qt-block-indent: 0; text-indent: 0px; margin: 0px;"> </p>Heng LiXinqiang LiHongchang Wei
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 900-908 | 10.12694/scpe.v25i2.2587 | Application of Control Algorithm in the Design of Automatic Crimping Device for Connecting Pipe and Ground Wire
//www.scpe.org/index.php/scpe/article/view/2606
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Due to the low effectiveness and poor quality of manual crimping of grounding wires, the author proposes the design of an automatic crimping device for connecting tube grounding wires based on intelligent fully automatic technology. The device consists of a microcontroller, an upper computer control interface, an electric push rod, an infrared sensor, a pressure transmitter, and other devices. The staff used the upper computer monitoring interface to set the relevant parameters for grounding wire crimping, and used X-ray digital imaging technology to measure the crimping size of the grounding wire. The size met the set parameter conditions. Through the PID control algorithm in the microcontroller, the stepper motor was controlled to push the clamp to move, completing the automatic crimping of the grounding wire. The X-ray detection method was introduced to detect the quality of the grounding wire after the crimping was completed. The experimental results show that the average deviation between the measured crimping size of the grounding wire and the actual measurement size by the automatic crimping device is only 0.06 mm, indicating that its measurement results are accurate; The success rate of crimping exceeds 95%. The above experimental results verify that the designed crimping device has high stability and reliability, and good quality detection effect.</p> <p style="-qt-paragraph-type: empty; -qt-block-indent: 0; text-indent: 0px; margin: 0px;"> </p>Congbing ShengPeng XingXiuzhong CaiZheng Shao
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 909-919 | 10.12694/scpe.v25i2.2606 | Linear Anti-interference Algorithm for Digital Signal Transmission in Fiber Optic Communication Networks based on Link Analysis
//www.scpe.org/index.php/scpe/article/view/2607
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In order to achieve accurate transmission of protection signals in fiber optic communication networks, it is necessary to perform channel balancing configuration of fiber optic communication networks and adaptive forwarding control processing of relay protection signals, the author proposes an accurate transmission method for relay protection signals in fiber optic communication networks based on time-varying multipath fading suppression and adaptive beamforming. The system analyzes the sources of wireless long-distance pain signal interference signals, introduces anti-interference technologies such as two-dimensional joint processing (STAP), provides anti-interference algorithms and related gain analysis, and conducts signal processing gain simulation using MATLAB. Based on the analysis of comprehensive simulation results, at a given symbol length, the signal bandwidth increases, and the processing gain infinitely approaches the given theoretical limit value, rather than increasing nonlinearly. The reason is that the channel is affected by noise, and the channel estimation value and signal conjugate multiplication produce a noise quadratic term. At this point, the estimated value of the coherent region channel is reduced by the influence of noise, and the signal-to-noise ratio loss caused by the noise quadratic term is reduced, so the processing gain increases. During the process of infinite increase in signal bandwidth, the input signal-to-noise power ratio of the receiver tends to decrease towards an infinite value, limited by the size of the coherent region. The channel estimation value increases under the influence of noise, and the noise quadratic term is the main factor affecting the output noise power. 
When the symbol length is greater than the coherent time, the smaller the maximum Doppler frequency shift and the larger the coherent detection area, the greater the processing gain.</p>Jing WuCheng JinZiwu Wang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 920-927 | 10.12694/scpe.v25i2.2607 | Network Traffic Monitoring and Real-time Risk Warning based on Static Baseline Algorithm
//www.scpe.org/index.php/scpe/article/view/2610
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">With the rapid growth of network traffic, in order to monitor network traffic, the author proposes a baseline based traffic inspection method. The main objective is to develop a global system for identifying malicious traffic, rather than a precise method for detecting the types of worms produced by malicious traffic. Although traffic is caused by the causes, network administrators can use this international search technique to detect malicious traffic data. The system based approach mainly includes designing time based on the traditional traffic model, detecting various equipments and network traffic process, and configuring the traffic flow according to each time frame. This method uses Cisco's NetFlow Collector, a NetFlow Collector (NFC), to collect raw NetFlow data transmitted by the device through UDP every 5 minutes. the Then, three-dimensional data such as communication port, communication time, and traffic flow (bytes or packets) is used to filter, remove the different values, calculate the base values, and compare the real-time results with the base values to check the traffic defects in the current network. If there are differences between the monitoring data and the system configuration at the same time, the system will issue an abnormal warning, and as time accumulates, the alarm level will gradually escalate.</p> <p style="-qt-paragraph-type: empty; -qt-block-indent: 0; text-indent: 0px; margin: 0px;"> </p>Zhaoli WuJunwei Liu
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 928-937 | 10.12694/scpe.v25i2.2610 | Application of Improved PSO and BP Hybrid Optimization Algorithm in Electrical Automation Intelligent Control
//www.scpe.org/index.php/scpe/article/view/2614
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">A fuzzy RBF-PID control strategy based on particle swarm optimization (PSO) algorithm is proposed to solve the problem of large inertia lag in temperature control system of industrial production refuse furnace. In this control system, an improved particle swarm optimization algorithm combined with inertia weight and genetic transformation was used to optimize the initial values of membership functions of fuzzy RBF (radial basis function). Then, BP (error backpropagation) algorithm is used for fine tuning, and fuzzy reasoning and RBF learning ability are combined to adjust the PID control parameters online to achieve the optimal PID control effect. The simulation results show that the algorithm has fast tracking, small overshoot, and is not easily trapped in local minima. At the same time, its robustness and anti-interference performance are better than traditional PID control.</p>Lijing LiXiaojian WangMei Yang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 938-943 | 10.12694/scpe.v25i2.2614 | Design of Computer Information Management System Based on Machine Learning Algorithms
//www.scpe.org/index.php/scpe/article/view/2615
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In order to improve the efficiency of office automation, regulate work frequency, and improve office efficiency, this paper presents a computer information management system design based on machine learning technology. Firstly, the basic design principles of computer information management systems are analyzed, and secondly, risk prediction is studied. The risk of computer information management systems is caused by the cross influence of different risk factor indicators, and has linear and nonlinear characteristics. Using a single prediction model cannot obtain accurate prediction results. Therefore, the risk prediction method for computer information management based on machine learning technology. The risk prediction method is established by using Analytic Hierarchy Method in machine learning algorithms, and the historical data is collected according to the index system. The weight of the initial prediction is determined by the combination of subjective and objective weight; In machine learning algorithms, risk prediction and benefit prediction are used as input and output methods for cloud machine learning. Through training and training, a risk prediction model is established to obtain higher prediction efficiency. The simulation results show that the prediction accuracy of this method is 95.5%, which can estimate the hazard existing in computer information management and improve the method.</p>Yan Li
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 944-951 | 10.12694/scpe.v25i2.2615 | Automatic Control of Low Voltage Load in Power Systems Based on Deep Learning
//www.scpe.org/index.php/scpe/article/view/2616
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Due to the interference of false data, there is a large error in the mining results of low voltage loads in the power system. In response to this problem, the author proposes a design of an intelligent mining system for low voltage loads in the power system based on deep learning. Using ARM+DSP dual CPU structure, initializing the adapter agent, and using dual arm spiral antennas, designing a low-voltage load monitor to detect partial discharge signals in the 500-1500 MHz frequency band and suppress noise interference; By transmitting monitoring information to the intelligent switch through CAN bus or 485 bus, remote monitoring can be achieved; Based on the contact points and current characteristics of the circuit breaker, a current transformer has been designed to reduce the range of induced voltage variation; Construct a continuous set of functions MMD in the space, adjust the original network structure, establish a deep learning mining model, initial network parameters, eliminate false data in the network, optimize the network using target domain data, and combine mining engines to achieve intelligent data mining. According to the experimental results, the maximum difference between the load of phase A of the data processing system based on numerical simulation and the actual data is 1000 kVA at a time of 6 seconds; When the load of phase B is 4 seconds, the maximum difference between it and the actual data is 2000 kVA; When the load of phase C is 8 seconds, the maximum difference between it and the actual data is 2000 kVA. It has been proven that the mining error of the system is 0, and it has a precise mining effect.</p> <p style="-qt-paragraph-type: empty; -qt-block-indent: 0; text-indent: 0px; margin: 0px;"> </p>Yaohui SunHongyu ZhangHaolin LiShu WangChunhai Li
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 952-960 | 10.12694/scpe.v25i2.2616 | Target Image Processing Based on Super-resolution Reconstruction and Deep Machine Learning Algorithm
//www.scpe.org/index.php/scpe/article/view/2656
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In dictionary-based single-frame image reconstruction algorithms, dictionaries rely on the design of artificial shallow features and are limited in their ability to represent image features. Therefore, this paper proposes a high-accuracy reconstruction method based on deep learning feature dictionary. This algorithm first uses a deep network to learn high-resolution and low-resolution training example images with deep features; Then co-train the feature dictionary under the super dense framework of the sparse dictionary; Finally, a single low-resolution image can be input and a super-resolution reconstruction can be performed using a dictionary. From the theoretical analysis, the introduction of deep network to extract the deep-level features of the image and its use in dictionary training is more beneficial to complement the high-frequency information in the low-resolution image. Experiments show that the proposed method achieves the best results in terms of both the peak signal-to-noise ratio and the gradient energy function of the reconstructed images. This shows that compared with traditional interpolation methods and some deep learning methods, the proposed method can recover image details to a high degree while preserving the original image damage information. This proves that the subjective visual and objective evaluation indicators of the algorithm presented in this article are higher than those of the comparative algorithm.</p>Yang LinPing ZhangHe ZhangGuoping Song
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 961-971 | 10.12694/scpe.v25i2.2656 | Interface Control and Status Monitoring of Electronic Information Equipment based on Nonlinear Data Encryption
//www.scpe.org/index.php/scpe/article/view/2521
<p>An advanced electronic information equipment interface control and status monitoring system is proposed to ensure the fairness, objectivity, and security of information while identifying responsibility for traffic accidents. Through an in-depth analysis of the system's security requirements and the current landscape of information security technology, a robust security strategy is developed for each crucial system stage. A PC-based platform is developed for efficient data acquisition, secure processing, reliable transmission, and fortified storage, focusing on the implementation of nonlinear data encryption methods. Performance was evaluated through rigorous testing using files ranging from 3 MB to 10 MB. The results revealed a significant improvement in the system's overall speed and efficiency, with an average performance gain of roughly 25% over the original platform. The proposed system demonstrated a 15% to 30% increase in processing speed, establishing its capability to protect data integrity during transmission, facilitate accurate identification of data-recording equipment after an accident, and safeguard the security of stored data. The developed system effectively addresses critical challenges in ensuring data integrity and security in traffic accident investigations.</p>Min Yan, Hua Zhang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 751-759 | 10.12694/scpe.v25i2.2521 | Detection and Prevention of Cyber Defense Attacks using Machine Learning Algorithms
//www.scpe.org/index.php/scpe/article/view/2627
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Recent advancements in computing power, memory capacities, and connectivity have led to a corresponding surge in the utilization of big data, online platforms' prevalence, and machine learning's sophistication. Concerns regarding safety and the need for state-of-the-art security tools and methods to counter evolving cybercrime are amplified by the swift digitization of the world. This study investigates defensive and offensive applications of machine learning in cybersecurity. Additionally, it explores potential strategies to mitigate cyberattacks on machine learning models. The focus is on how machine learning facilitates cyberattacks, including developing intelligent botnets, advanced phishing using spear techniques, and deploying stealthy malware. Furthermore, the paper highlights the significance of artificial intelligence in digital safety, emphasizing its role in malware analysis, network vulnerability assessment, and threat prediction.</p>Yongqiang Shang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 760-769 | 10.12694/scpe.v25i2.2627 | Security and Privacy of 6G Wireless Communication using Fog Computing and Multi-Access Edge Computing
//www.scpe.org/index.php/scpe/article/view/2629
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">The challenges surrounding the confidentiality of data transmission in the context of the upcoming sixth-generation (6G) wireless networks are proposed in this research. The study explores the potential role of blockchain systems in enhancing data security. It examines the integration of machine learning (ML) techniques to address the growing complexities of handling massive data volumes within the 6G environment. This research involves a comprehensive survey of existing strategies for maintaining data confidentiality in automotive communication systems. It further investigates an analysis of confidentiality approaches inspired by the 6G network architecture. The study examines the potential security implications of the Internet of Everything (IoE). It evaluates current research issues related to safeguarding data confidentiality within the framework of 6G communication among vehicles. The exploration involves reviewing ML techniques and their applicability in resolving the data processing challenges inherent in the 6G wireless network environment. The proposed work reveals the increasing complexity and variability of the 6G wireless network environment, leading to potential challenges in protecting private and confidential data during communication. It highlights the promising role of blockchain systems in addressing data security concerns within the 6G network context. Additionally, the study underscores the transformative potential of integrating ML techniques to handle the massive data volumes generated within the 6G ecosystem. The research highlights the importance of these technologies in mitigating data security risks and ensuring the confidentiality of information exchanged within the 6G communication framework.</p>Ting XuNing WangQian PangXiqing Zhao
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 770-781 | 10.12694/scpe.v25i2.2629 | Sustainable Development in Medical Applications Using Neural Network Architecture
//www.scpe.org/index.php/scpe/article/view/2631
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">The purpose of this research is to propose a methodology utilizing machine learning techniques to support medical organizations in effectively managing risks. Specifically, the study aims to connect social media data to identify and assess potential threats, ultimately enabling healthcare management to make informed decisions for their organizations and clients. The research employs machine learning algorithms to analyze user-generated content on social media platforms, generating comprehensive visual representations of various risk categories and their magnitudes. Additionally, the study utilizes data simplification techniques, including categorization, to streamline data processing and enhance overall efficiency. A computational framework is also developed, incorporating closed-form connections for threat assessment and evaluation. The study further empirically analyses the Consumer Value Stores (CVS) established for medical care in the United States. The findings reveal that prevalent threats within the lower quartile of client messages about CVS services include operational, financial, and technological risks. The severity of these risks is distributed among high risk (21.8%), moderate risk (78%), and minimal risk (0.2%). The research also presents several metrics to demonstrate the robustness of the proposed framework, confirming its effectiveness in effectively identifying and addressing potential threats. This research provides insights that can help healthcare management make informed decisions and foster a safer and more secure environment for their organizations and the people they serve.</p>Shuyi Jiang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 782-791 | 10.12694/scpe.v25i2.2631 | Group Intelligent City Mobile Communication Network's Control Strategy based on Cellular Internet of Things
//www.scpe.org/index.php/scpe/article/view/2640
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Mobile communication network optimization heavily depends on power control technology, which impacts the effectiveness of the network. This paper aims to enhance control over nonlinear mobile communication networks and achieve superior performance by applying the particle swarm optimization (PSO) algorithm in the control domain. Addressing limitations in the basic PSO algorithm, improvements are made and applied to urban mobile communication networks. The methodology involves modifying the PSO algorithm to address identified issues and applying the enhanced algorithm to communication network scenarios. Simulation results indicate that with an initial particle count of 10 and 100 iterations, the optimized values for and are 0.691 and 0.486, respectively, resulting in an objective function value of 55.514. This achievement validates the successful implementation of the optimization process for mobile communication network control. The findings reveal that the proposed grad particle swarm optimization (Grad-PSO) algorithm exhibits mobile network optimization by robust search capability and rapid convergence.</p>Jiazheng Wei
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 792-799 | 10.12694/scpe.v25i2.2640 | Improving the Efficiency and Reliability of Renewable Energy Systems
//www.scpe.org/index.php/scpe/article/view/2529
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">The implications of relevant sustainable practices have reflected the scholastic features to improve the environmental resources. The study highlights the importance of conservation of environment and power system in search of proven solutions to improve the penetration level. The need for flexibility has signified the special characteristics that are conventional in increasing the integrity of renewable resources. The ideologies have global trends have integrated the cost of affectivity with the growing applications of power projects. The architecture of wind and solar energy has touched successful benchmarks with respect to the real world implications. The conceptual practices help in initializing the practices towards biomass as well as determining the impact of renewable energy on wind and solar power energy in a significant manner. In addition to that, the applications of solar or photovoltaic cell have been mentioned in the study which has greater significance. The ideas based on the emission of greenhouse gases have been evaluated in the study that shows the after effects as well. The use of passive solar energy and active solar energy has clearly discussed the concept of sustainability and the process of administering towards various climatic conditions. Lastly, the impact of renewable resources on social, environmental, technical and economic aspects has verified the relevant practice of sustainability.</p>Xing ChenDingguo HuangQingchun RenYong YangYe Yuan
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 | 25(2): 637-644 | 10.12694/scpe.v25i2.2529 | Implementation of Rules and Routines in Physical Education Teaching and Learning in China
//www.scpe.org/index.php/scpe/article/view/2530
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Physical education is a significant aspect of the Chinese education system. Moreover, on a regular basis, physical education is an integral part of the Chinese education system. Therefore, the following study looks into different rules and routines for physical education teaching and learning in China. Most of the rules and regulations are formed based on guidelines provided by the Ministry of Education. Therefore, a systematic discussion regarding physical education training in China is conducted in the analysis. The significance of physical education (PE) in the Chinese education system cannot be overstated. It serves a dual purpose by not only promoting physical fitness but also fostering holistic personal development. PE contributes to the physical well-being of students, helping them lead healthier lives, but it also instills essential life skills like teamwork, discipline, and perseverance. In China, the Ministry of Education, as the overarching authority on educational matters, plays a pivotal role in shaping the rules and routines governing physical education. These rules encompass a wide array of aspects related to the curriculum, including the allocation of resources, curriculum design, assessment, and teaching methodologies. By adhering to these regulations, educational institutions across the country can ensure a standardized and comprehensive approach to physical education.</p>Huimin Zhang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 646–652 · 10.12694/scpe.v25i2.2530
Research on the Design of Integrated Energy Management and Optimization Control Systems for Novel Power Systems
//www.scpe.org/index.php/scpe/article/view/2531
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">This research paper determines the impact of integrated energy management and optimization control on novel power systems. The research objectives centre on integrating the power system to reduce power consumption through optimization. Optimal control seeks a control law for a given system that satisfies specified criteria for achieving its goals, and an optimization problem has three main parts: decision variables, constraints, and an objective function. Four objectives are formulated, each addressing the importance of an integrated management system for power systems. Various factors affect the optimization, including the role of multigrain recursion, global convergence, the properties of the optimization model, and local convergence. The optimization control system plays a crucial role in power-system development, and the research is based on the effective design of energy management for power systems. The primary applications of the optimization techniques are storage systems, electromagnetics-based design, and mapping design for microwave structures. The rate of unsustainable energy management is increasing while sustainable energy management is decreasing, and the conclusion underlines the significance of optimization control for power systems.</p>Zhiqian Yang, Xianyou Wu, Qiuhua Chen, Aisikaer, Liangnian Lv, Lei Wu
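The three parts named in the abstract (decision variables, constraints, objective function) can be made concrete with a toy dispatch sketch. This is purely illustrative and not from the paper; the sources, prices, and demand figures are invented for the example.

```python
# Toy two-source dispatch: the three parts of an optimization problem.
# Decision variables: grid_kw, solar_kw (power drawn from each source).
# Constraint:         grid_kw + solar_kw must equal a 100 kW demand.
# Objective:          minimize total cost, with grid power the costlier source.

def dispatch_cost(grid_kw, solar_kw, grid_price=0.30, solar_price=0.05):
    """Objective function: cost of one hour of supply (currency units)."""
    return grid_kw * grid_price + solar_kw * solar_price

def optimize_dispatch(demand_kw=100, solar_cap_kw=60):
    """Enumerate feasible decisions and keep the cheapest one."""
    best = None
    for solar_kw in range(solar_cap_kw + 1):   # solar capacity constraint
        grid_kw = demand_kw - solar_kw         # demand equality constraint
        cost = dispatch_cost(grid_kw, solar_kw)
        if best is None or cost < best[0]:
            best = (cost, grid_kw, solar_kw)
    return best

cost, grid_kw, solar_kw = optimize_dispatch()
```

As expected, the optimizer saturates the cheaper source (solar at its 60 kW cap) and covers the remainder from the grid.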
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 653–660 · 10.12694/scpe.v25i2.2531
Stability Study of New Power System Based On Multi-Intelligent Body Collaboration
//www.scpe.org/index.php/scpe/article/view/2533
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Developing, implementing, and maintaining a multi-intelligent body collaboration system necessitates significant investments in finances, time, and expertise. While multi-intelligent body collaboration has the potential to enhance power system stability significantly, it also comes with challenges related to interoperability, security, system complexity, and resource allocation, and resource-allocation and training costs can be substantial. Addressing these challenges is crucial to harnessing the full benefits of this approach and ensuring the reliable and efficient operation of power systems. Effective communication and coordination strategies among intelligent agents are integral to maintaining power system stability: timely information exchange, load balancing, disturbance management, and the integration of AI contribute to a more resilient and adaptive energy grid. As technology advances, refining these strategies will be essential to meet the growing demands of an ever-evolving power landscape, whose dynamism, driven by technological advancement and evolving needs, necessitates an agile and adaptable power system. The fusion of multi-intelligent bodies and modern technology stands as a testament to the collective pursuit of a more reliable, efficient, and sustainable energy future, and the continued innovation and enhancement of these strategies serve as a compass toward it.</p>Xianyou Wu, Zhiqian Yang, Xin Du, Liangnian Lv, Aisikaer, Yanchen Yang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 661–667 · 10.12694/scpe.v25i2.2533
Study on Grid-Connected Power Quality Improvement of Wind Farms Based on Repetitive Controller
//www.scpe.org/index.php/scpe/article/view/2534
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">The project highlights wind energy and the processes wind farms follow to generate energy from wind power and wind capacity. Wind energy is considered one of the most effective sources of electrical energy, and because of its cost-effectiveness it can benefit business sectors in countries across the globe. The study encompasses the strategies employed within wind farms to preserve and efficiently harness wind power for the generation of electrical energy, and it describes the diverse array of instruments used to convert wind power into usable wind energy. Beyond the technology itself, wind farms grapple with formidable challenges stemming from the inherently variable nature of wind, and various issues concerning power quality and the stability of the power system have also been identified within these wind farms. To mitigate these multifaceted challenges, the project introduces a range of control measures for implementation.</p>Minjie Zhu, Liangnian Lv, Xubo Le, Aisikaer, Haibo Li, Yucheng Gao
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 668–674 · 10.12694/scpe.v25i2.2534
Smart Farming Using the Big Data-Driven Approach for Sustainable Agriculture with IoT
//www.scpe.org/index.php/scpe/article/view/2540
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">The study showcases how deep learning is applied in agriculture, including Deep IoT, which simplifies the procedure using deep neural networks. The use of IoT in the agricultural sector makes the operation of farms more effective, and IoT sensors support the production of high-grade products. Crop marketing and crop finance are two further applications of smart agriculture that support better harvests. Through IoT technology, farmers receive notifications about temperature and climate. The method requires professional, qualified employees to monitor the system and its methods properly.</p> <p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Supplying the proper nutrients to the proper crop increases the lifespan of the harvest and keeps the crop free from threats. The speed of producing agricultural goods can also be improved by using IoT, and the roles of big data analytics (BDA) and IoT have grown in improving the production of farm products. Introducing smart farming to the rural sector requires capable, qualified trainers to give personnel proper teaching. The modernization of the farming process and the use of smart, modern technology are characteristic of the era of "Agriculture 3.0".</p>Buyu Wang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 675–682 · 10.12694/scpe.v25i2.2540
Scalable Solutions for Wind and Solar Distributed Generation
//www.scpe.org/index.php/scpe/article/view/2541
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">The application of informative data in the energy sector is defined and explained as one of the crucial elements of the Energy Internet. Advancement of the grid system is vital and promising, yet it faces many issues connected with the implementation of renewable energy, including solar and wind energy. The capacity to collect data is the main element that eases decision-making. The benefits and advantages of technological advancement are shown by data analytics of renewable energy sources at various power stations. The resulting framework shows the potential for establishing data analytics in the smart grid and in power utilities supplied by renewable resources. Seven domains and approaches are used to predict the stability, flexibility, and safety gained from grid advancement. Secondary qualitative methods are used to define and explain the importance of the grid system in relation to renewable sources of energy, namely wind and solar.</p>Jianzhong Li, Feiping Yang, Yu Chen, Li Ao, Wei Xiao
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 683–691 · 10.12694/scpe.v25i2.2541
Physical Education Teaching Quality Evaluation Method Using Mobile Edge Computing in the Online and Offline Environment
//www.scpe.org/index.php/scpe/article/view/2542
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">The development of technology has affected every stage of human life, and sports and physical education can likewise be improved through computing technology. Mobile edge technology aids in gathering precise data related to physical education, making specific improvement possible. The following analysis examines how mobile edge computing aids the evolution of physical education teaching, and the study focuses on developing an appropriate path for integrating MEC into PE education. By utilising Mobile Edge Computing (MEC) technologies in both online and offline learning contexts, this study offers a thorough way of assessing the quality of sports teaching. The suggested approach integrates multiple characteristics and performance measures to evaluate the efficacy of physical education instruction and participation, taking into consideration the changing educational landscape.</p>Huimin Zhang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 692–699 · 10.12694/scpe.v25i2.2542
Stability Study of Grid-Connected Power System for Wind Farms Considering Power Control
//www.scpe.org/index.php/scpe/article/view/2543
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Wind energy has emerged as a pivotal practice in the contemporary energy landscape, generated through grid-connected power sources aligned with systemic design principles. This study explores the surge in deployment rates, offering insights into the factors impacting sustainable electricity production. It delves into the intricacies of managing the variability and uncertainty inherent in energy demand, catalysing the integration of grid-based solutions that enhance sustainability, and probes the dynamic nature of power-supply paradigms, revealing a journey of continuous enhancement through the application of cutting-edge resource methodologies. Against the backdrop of global shifts in electricity dynamics, the study uncovers the implications of energy depletion and wasteful consumption practices, spotlighting a burgeoning movement toward optimising grid electricity resources on a macro scale. The intricacies and nuances of power-supply challenges are comprehensively dissected, offering valuable insights. Furthermore, the study explores the pivotal role played by information-technology innovators in consolidating the predictability of wind energy and augmenting its viability, and it aligns with forward-looking reviews, underscoring the actionable strategies taken. The culmination of these efforts not only enhances predictability but also unlocks a spectrum of reflective and adaptive resources in wind-energy utilisation.</p>Liangnian Lv, Zhong Fang, Zhiqian Yang, Aisikaer, Lei Wu, Yanchen Yang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 700–708 · 10.12694/scpe.v25i2.2543
Research on Speech Communication Enhancement of English Web-based Learning Platform based on Human-computer Intelligent Interaction
//www.scpe.org/index.php/scpe/article/view/2544
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">This study presents a novel web-based learning platform that leverages human-computer intelligent interaction to enhance English communication skills. The platform integrates cutting-edge technologies to create an immersive learning experience, combining natural language processing, speech recognition, and interactive exercises. Learners engage in real-time conversations with virtual tutors, receive personalized feedback, and access a vast repository of educational resources. The platform not only facilitates language acquisition but also encourages self-paced learning, making it a valuable tool for both educators and students. By harnessing the power of artificial intelligence, this web-based platform represents a significant advancement in the realm of English language education. To overcome the remaining latency and monitoring issues, this paper proposes an SVM with an improved satin bowerbird optimization algorithm (SVM-ISBBO). SVM-ISBBO uses fog computing services that minimize latency and speed up processing, effectively handling large numbers of wearable devices. In the proposed work, SVM-ISBBO monitors students' communication, vocal parameters, blood pressure, and related values obtained from wearable sensor devices, and notifications are sent to teachers. Teachers review the student information and send alert notifications back to the students so that proper measures can be taken. All of this information is stored securely in fog-based cloud storage. The accuracy rates were 78.56% for KNN, 81.74% for NB, and 85.15% for SVM, while the proposed SVM-ISBBO achieved 92.34%.</p>Yufang Gu
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 709–720 · 10.12694/scpe.v25i2.2544
Optimization Study of Grid Access for Wind Power System Considering Energy Storage
//www.scpe.org/index.php/scpe/article/view/2545
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">This study provides a detailed discussion of grid access for wind power systems and of the energy storage that has been in high demand in recent days. Wind power is a renewable energy resource that can help meet the energy crisis tied to fuel resources, which has grown in recent times. It is also helpful where access to a power supply is difficult and in places facing shortages of energy and power supply. Some issues arise in the maintenance and handling of the power grid of wind turbines, and the technical process of handling the machines, together with the one-time initial investment cost, is also demanding. A wind power grid with high energy-storage capacity, however, can help fulfil the demand for energy that is the main issue of today's power-supply systems. These issues should be mitigated with expert help, and coastal areas with a plentiful, continuous flow of wind are well suited to wind-based power supply. These advanced systems hold the potential to mitigate the pervasive energy-demand issues plaguing contemporary power-supply systems. By expertly addressing these challenges and strategically locating wind power grids, especially in coastal areas with consistent wind flow, the dependable supply of electrical energy can be significantly enhanced, thereby offering an effective solution to the prevailing energy-supply challenges of our time.</p>Xubo Le, Qiuhua Chen, Minjie Zhu, Yucheng Gao, Bingye Zhang, Aisikaer
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 721–728 · 10.12694/scpe.v25i2.2545
Evaluation of Monitoring Technologies and Methods for Micro Plastics in Water as Novel Pollutants
//www.scpe.org/index.php/scpe/article/view/2548
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Microplastics have recently emerged as a major biohazard with a considerable impact on the environment. Because of their detrimental effects on the environment and wildlife, efforts to control microplastics have been in the headlines. Controlling these minuscule yet harmful particles requires effective monitoring, detection, and management strategies, and this analysis delves into the diverse techniques and technologies available for tracking and mitigating microplastic pollution. The following analysis therefore aims to evaluate the monitoring technologies and methods for microplastics, observing each monitoring method along with its advantages and disadvantages. A secondary qualitative method was used to develop the analysis, and a graphical representation of the efforts to control this novel pollutant is analysed along with the relevant problems; hence, a coherent discussion is presented. This research contributes to the broader understanding of microplastic pollution and its monitoring while underlining the need for enhanced control measures. It provides a valuable resource for policymakers, environmentalists, and researchers working toward a cleaner, more sustainable environment. As microplastics continue to infiltrate ecosystems worldwide, comprehensive monitoring and control efforts are of paramount importance.</p>Ke Hu, Dongdong Li, Xiaolei Cui, Donghua Hu, Junliang Chen, Shaopeng Zhuan, Hao Chang, Yaping Zhang, Tingting An, Juqin Zhang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 729–738 · 10.12694/scpe.v25i2.2548
A Method for Specifying Yoga Poses Based on Deep Learning, Utilizing OpenCV and Media Pipe Technologies
//www.scpe.org/index.php/scpe/article/view/2590
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Yoga is an age-old discipline that calls for physical postures, mental focus, and deep breathing. Yoga practice can enhance stamina, power, serenity, flexibility, and well-being, and yoga is currently a well-liked type of exercise worldwide. The foundation of yoga is good posture: even though yoga offers many health advantages, poor posture can lead to issues including muscle sprains and pains. Over the last few years, people have become more interested in practising online than in person, and our strategy benefits those who are accustomed to internet life and find it difficult to make time to visit yoga studios. Using the web cameras in our system, an image is taken as input, the MediaPipe library first skeletonizes that image, and the model then categorizes the yoga pose. The input obtained from the yoga postures is processed with a variety of deep learning models, including VGG16 (Visual Geometry Group), VGG19, Conv2D, and CNN architectures, to improve the asana.</p>T Anuradha, N. Krishnamoorthy, C.S. Pavan Kumar, L.V. Narasimha Prasad, Anilkumar Chunduru, Usha Moorthy
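The classification step described above can be sketched in miniature. In the real pipeline MediaPipe would supply 33 (x, y) body landmarks per frame, which a CNN then classifies; here tiny hypothetical 4-landmark "skeletons" and a nearest-template rule stand in so the example is self-contained. Pose names and coordinates are invented for illustration.

```python
import math

# Invented reference skeletons: 4 (x, y) landmarks per pose, normalized to [0, 1].
POSE_TEMPLATES = {
    "mountain": [(0.5, 0.1), (0.5, 0.4), (0.5, 0.7), (0.5, 0.95)],
    "triangle": [(0.3, 0.2), (0.5, 0.5), (0.7, 0.6), (0.9, 0.9)],
}

def landmark_distance(a, b):
    """Mean Euclidean distance between corresponding landmarks."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def classify_pose(landmarks):
    """Assign a skeletonized frame to the nearest pose template."""
    return min(POSE_TEMPLATES,
               key=lambda name: landmark_distance(landmarks, POSE_TEMPLATES[name]))

# A slightly noisy upright skeleton should match the "mountain" template.
frame = [(0.52, 0.12), (0.49, 0.41), (0.5, 0.68), (0.51, 0.93)]
label = classify_pose(frame)
```

A trained network replaces the nearest-template rule in practice, but the input/output contract (landmark vector in, pose label out) is the same.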
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 739–750 · 10.12694/scpe.v25i2.2590
Text Summarization for Online and Blended Learning
//www.scpe.org/index.php/scpe/article/view/2556
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Online learning text summarization is vital for managing the constant influx of online information. It involves condensing lengthy online content into concise summaries while retaining the original meaning and information. While several online summarization tools are available, they often fall short in preserving the underlying semantics of the text. In this paper, we introduce an innovative approach to online text summarization that strongly emphasizes capturing and preserving the semantics of the text. Our automatic summarizer leverages distributional semantic models to extract and incorporate semantics, producing high-quality online summaries. To evaluate the effectiveness of our online summarization system, we conducted experiments on a diverse range of online content. We employed ROUGE metrics, a popular evaluation method for text summarization, to assess our system's performance. Additionally, we compared our results with those of four state-of-the-art online summarizers. The outcome of our study demonstrates that our online summarization approach, which integrates semantics as a fundamental feature, outperforms other reference summarizers. This conclusion underscores the significance of leveraging semantics in the context of online learning text summarization. Furthermore, our system's ability to reduce redundancies in online content makes it a valuable tool for managing information overload in the digital age.</p>Mahira Kirmani, Gagandeep Kaur
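The ROUGE evaluation mentioned above can be illustrated with a minimal ROUGE-1 computation: unigram overlap between a candidate summary and a reference summary, where recall divides matched unigrams by the reference length and precision by the candidate length. Real evaluations use a full ROUGE toolkit with stemming and multiple variants; this sketch, with invented example sentences, shows only the core idea.

```python
from collections import Counter

def rouge1(candidate: str, reference: str):
    """ROUGE-1 recall and precision from clipped unigram counts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped match count: each reference word matches at most its candidate count.
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    return recall, precision

recall, precision = rouge1(
    "the model produces concise summaries",
    "the model produces concise accurate summaries",
)
```

Here 5 of the 6 reference unigrams are matched (recall 5/6) and every candidate unigram appears in the reference (precision 1.0).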
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 972–986 · 10.12694/scpe.v25i2.2556
The Effects of Integrated Feedback based on AWE on English Writing of Chinese EFL Learners
//www.scpe.org/index.php/scpe/article/view/2617
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Through a two-semester experiment on the English writing of 64 Chinese EFL learners, this study examines the effects of three types of web-based feedback (automatic feedback (AF), automatic feedback with teacher feedback (TF), and automatic feedback with peer feedback (PF)) based on the Pigai website. The results show that all three modes of feedback can promote the writing of English as a Foreign Language (EFL) learners of different English proficiency (high level: F = 2.672, P = .132; low level: F = .388, P = .766). The results also reveal a significant difference in the high-level group between the automatic feedback + peer feedback group and the automatic feedback group (I - J = -6.636, P = .000), as well as between the automatic feedback + teacher feedback group and the automatic feedback group (I - J = -6.220, P = .001; I - J = -5.100, P = .001), which indicates that automatic feedback + manual feedback (PF+TF) promotes the improvement of high-level learners' English writing more than single AF. In the low-level group, there is no significant difference between the AF + PF group and the AF group (I - J = -1.221, P = .925) or between the AF + TF group and the AF + PF group (I - J = 6.227, P = .097), but there is a significant difference between the AF group and the AF + TF group (I - J = -5.122, P = .032), indicating that AF + TF is of great help in improving the English writing of low-level English learners.</p>Mei Liu, Changzhong Shao
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 987–996 · 10.12694/scpe.v25i2.2617
Research on Network Security Situation Awareness Technology Based on Security Intelligent Monitoring Technology
//www.scpe.org/index.php/scpe/article/view/2604
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">This paper uses data mining technology to dynamically monitor the information systems of tobacco industry enterprises, building an Internet security situation awareness system for a big data environment. A weight-clustering method is used to classify users' network behavior, with the spacing of weights optimized to ensure the maximum difference between classes. NAWL-ILSTM technology then establishes a security situational awareness model for the Internet environment; in this project, a long short-term memory network trained with the Nadam optimizer (NAWL) performs the deep learning on the data. Finally, a network security situation assessment method for the tobacco industry is designed, completing the dynamic monitoring of tobacco-industry network security based on data mining. Simulation results show that the proposed method can effectively improve the safety-evaluation performance of the system and reduce evaluation errors.</p>Bingyu Yang
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 1107–1116 · 10.12694/scpe.v25i2.2604
A Distributed System Fault Diagnosis System based on Machine Learning
//www.scpe.org/index.php/scpe/article/view/2622
<p>Nowadays the distributed system has become the mainstream system for information storage and processing. Compared with traditional systems, distributed systems are larger and more complex; the average probability of failure is higher, and the difficulty and complexity of operation and maintenance are greatly increased. It is therefore necessary to use efficient methods to diagnose such systems. Our aim is to use a trained model to diagnose the fault data of a distributed system, obtaining as high a diagnostic accuracy as possible, and to create a web front end for users. The proposed technique uses the Stacking ensemble-learning approach to superpose models over the raw data. To realize this, we trained with a dataset of 10,000 records and assessed accuracy at regular intervals. Our best training result is about 80.69% accuracy, and the model can be used from the web side. By training the datasets and analyzing distributed-system faults with Stacking, a model with a test accuracy of 80.69% was obtained. Through this model and the web platform we built, faults of a distributed system can be diagnosed, and the diagnosis results are better than those of other models.</p>Yixiao Wang
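The Stacking idea referenced above can be sketched minimally: base classifiers each emit a prediction, and a meta-level combines those predictions into the final label. This is an illustrative toy, not the paper's implementation; the fault rules, thresholds, and meta-weights are invented, and in a real system the meta-level weights would be learned from base-model predictions on held-out data (e.g. with scikit-learn's `StackingClassifier`).

```python
# Two weak base classifiers over synthetic fault records (1 = fault, 0 = healthy).
def base_cpu(record):
    """Base model A: flags faults by high CPU load."""
    return 1 if record["cpu"] > 0.9 else 0

def base_latency(record):
    """Base model B: flags faults by high response latency."""
    return 1 if record["latency_ms"] > 500 else 0

def meta_predict(record, weights=(0.6, 0.6), threshold=0.5):
    """Meta-level: weighted combination of the base predictions.
    The weights here are fixed for illustration; Stacking learns them."""
    score = weights[0] * base_cpu(record) + weights[1] * base_latency(record)
    return 1 if score > threshold else 0

healthy = {"cpu": 0.4, "latency_ms": 120}
faulty = {"cpu": 0.95, "latency_ms": 800}
```

With both base models agreeing, the meta-score for the faulty record (1.2) clears the threshold while the healthy record (0.0) does not.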
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 1117–1123 · 10.12694/scpe.v25i2.2622
Intelligent Navigation System based on Big Data Traffic System
//www.scpe.org/index.php/scpe/article/view/1124-1133
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">This paper studies the navigation technology of the On-Orbit Servicing Spacecraft (OSS) and proposes an overall solution for the OSS navigation system. The composition of the OSS satellite navigation simulation system and its supporting simulation environment are studied. A new SSUKF navigation filter is then proposed, based on the spherical simplex sigma-point sampling transform (SSUT) and the unscented Kalman filter (UKF). The numerical simulation of the OSS system is studied based on MATLAB/RTW. Finally, the SSUKF algorithm and the conventional UKF algorithm are simulated digitally, and the effectiveness and advanced nature of the hybrid Kalman filter applied to spacecraft autonomous navigation are verified.</p>Xiu Zhang, Jian Kang, Haicun Yu
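The core of any UKF variant is the unscented transform: deterministic sigma points carry a Gaussian's mean and variance through a nonlinear function. The sketch below uses the standard symmetric one-dimensional formulation (n = 1, lambda = 2) and is only illustrative; it is not the paper's SSUKF, whose sigma-point set differs.

```python
import math

def unscented_mean(f, mean, var, lam=2.0):
    """Estimate E[f(x)] for x ~ N(mean, var) with symmetric sigma points."""
    n = 1                                   # state dimension
    spread = math.sqrt((n + lam) * var)     # sigma-point offset from the mean
    sigma_points = [mean, mean + spread, mean - spread]
    weights = [lam / (n + lam), 1 / (2 * (n + lam)), 1 / (2 * (n + lam))]
    return sum(w * f(x) for w, x in zip(weights, sigma_points))

# For x ~ N(0, 1) and f(x) = x^2, the true mean E[x^2] is exactly 1,
# and the transform recovers it from just three sigma points.
estimate = unscented_mean(lambda x: x * x, mean=0.0, var=1.0)
```

The full filter repeats this transform for both the process and measurement models at every time step, then fuses the result with the measurement in the usual Kalman update.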
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2) · 10.12694/scpe.v25i2.2654
VisiSense: A Comprehensive IoT-based Assistive Technology System for Enhanced Navigation Support for the Visually Impaired
//www.scpe.org/index.php/scpe/article/view/2619
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">The field of assistive technology for the visually impaired looks for novel approaches to enhance independence and navigation. In this field, systems have to reliably identify and transmit environmental data in order to facilitate safe and effective navigation by visually impaired users. The goal of this research is a sophisticated assistive-technology framework that significantly enhances visually impaired navigation, making object detection and environmental awareness more efficient, dependable, and intuitive. This study introduces VisiSense, "A Comprehensive IoT-Based Assistive Technology System for Enhanced Navigation Support for the Visually Impaired." VisiSense is an IoT-based system with multiple components that enhances object detection: a handstick with embedded sensors for primary environmental sensing, a visual capture and transmission unit for processing visual data, and edge computing for object detection and classification. The system makes use of an R-CNN global computer vision model hosted on a cloud server, MobileNet computer vision models, and logistic regression with iterative learning. VisiSense's effectiveness is demonstrated by a performance analysis of its object-detection accuracy, processing speed, resource utilization, energy consumption, latency, and false-positive rate; in all of these categories, VisiSense performs better than Smart Stick and Smart Navigation. The data include a fastest processing time of 17 ms, a most efficient resource utilization of 41%, and object-detection accuracy of up to 99% at a 2 Mbps load. Across all load conditions, VisiSense has the lowest false-positive rate, energy consumption, and latency. The VisiSense assistive technology system is developed for the visually impaired: its excellent object-detection and navigation accuracy, speed, and efficiency enhance the user experience and hold the potential to increase the independence and quality of life of visually impaired people. This research contributes a significant advancement in assistive devices: smart, responsive technologies for the visually impaired.</p>Bhasha Pydala, T. Pavan Kumar, K. Khaja Baseer
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 1134–1151 · 10.12694/scpe.v25i2.2619
Optimum Batch Scheduling Model for Quality Aware Delay Sensitive Data Transmission over Fog Enabled IoT Network
//www.scpe.org/index.php/scpe/article/view/2620
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Emerging fog networks in Internet of Things (IoT) applications provide flexibility and agility for service providers, and the combination of fog nodes and edge nodes enables them to deliver a given network service. However, the selection of suitable edge and fog nodes and their scheduling remains a research challenge, and finding a globally optimal schedule for oversized data transmission in industrial IoT applications is crucial. Optimal batch scheduling has been regarded as a viable way to achieve optimal scheduling in other contemporary network models. This manuscript proposes an Optimum Batch Scheduling Model (OBSM) for quality-aware, delay-sensitive data transmission over fog-enabled IoT networks. A novel clustering technique is proposed to group the transmission nodes (fog or edge nodes) and the data packets, and then to pair each data group with a corresponding node group to achieve delay sensitivity and other quality factors such as energy efficiency. The scheduling between data and node groups is drawn from our previous contribution, the "Quality aware Energy Efficient Scheduling Model (QEESM) for Fog Enabled IoT Network". Simulation results show that, in terms of average makespan rate, average round-trip time, and energy consumption, OBSM performs noticeably better than contemporary scheduling models: its average makespan rate, round-trip time, and energy consumption per makespan are 23.3 ± 7.03, 17.8 ± 5.2, and 11.33 ± 6.9 joules, respectively, conclusively demonstrating that OBSM outperforms the existing models. The batch scheduling algorithm uses a unique unsupervised learning approach that clusters the transmission requests and transmission channels into multiple clusters.</p>Narayana Potu, Chandrashekar Jatoth, Premchand Parvataneni
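The group-then-pair idea described above can be sketched in a few lines: transmission requests are clustered by delay tolerance, nodes by latency, and the most delay-sensitive group is paired with the fastest node group. The record fields, thresholds, and two-way split below are invented for illustration; the paper's OBSM/QEESM formulation is considerably richer.

```python
def cluster(items, key, threshold):
    """Two-way split of items around a threshold on `key` (a toy stand-in
    for the paper's unsupervised clustering step)."""
    low = [i for i in items if key(i) <= threshold]
    high = [i for i in items if key(i) > threshold]
    return low, high

packets = [{"id": 1, "deadline_ms": 20}, {"id": 2, "deadline_ms": 400},
           {"id": 3, "deadline_ms": 35}, {"id": 4, "deadline_ms": 900}]
nodes = [{"name": "edge-a", "latency_ms": 5}, {"name": "fog-b", "latency_ms": 60}]

# Group requests by delay tolerance and nodes by latency.
urgent, relaxed = cluster(packets, lambda p: p["deadline_ms"], 50)
fast, slow = cluster(nodes, lambda n: n["latency_ms"], 30)

# Pairing: delay-sensitive packets go to low-latency (edge) nodes,
# the rest to fog nodes, so each batch meets its quality target.
schedule = {"delay_sensitive": (fast, urgent), "delay_tolerant": (slow, relaxed)}
```

Each entry of `schedule` is then handed to the per-group scheduler (QEESM in the paper) as one batch.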
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-24 · 25(2), 1152–1166 · 10.12694/scpe.v25i2.2620
Trajectory Interception Classification for Prediction of Collision Scope between Moving Objects
//www.scpe.org/index.php/scpe/article/view/2621
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In the fields of autonomous navigation and vehicle safety, accurately predicting potential collision points between moving objects is a significant challenge. A novel computing technique to enhance trajectory-interception analysis is presented in this paper. Our objective is to develop a model that can accurately forecast collision zones, improving road-transport safety and the use of autonomous cars. Our main contribution is a binary classification model called PCSMO (Prediction of Collision Scope between Moving Objects), which is based on zero-shot learning. Gann angles, typically 45 degrees, are used to analyze the trajectories of moving objects, an approach inspired by Gann Angle Numeric Nomenclature (GANN). Compared to earlier techniques, this model more accurately identifies potential collision-interception zones. The technique extracts the GPS coordinates of moving objects from video data using OpenCV and computes Gann angles for trajectory analysis, offering a more sophisticated comprehension of object movement patterns and interception points. To assess the precision, recall, F1-score, and prediction accuracy of our model, we employ 10-fold cross-validation; these metrics demonstrate how well the PCSMO model predicts potential collision zones compared with existing models. We find that our approach enhances trajectory analysis, a critical component of safer autonomous navigation systems, and with potential applications in autonomous-vehicle and UAV safety, the PCSMO model improves interception classification.</p>B. Uma Mahesh Babu, K. Giri Babu, Krishna B. T.
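As a geometric companion to the interception idea above (this is not the PCSMO model itself), the basic collision-scope question can be posed as a closest-approach computation: given two objects' positions and velocities, find when their separation is minimal and flag a potential collision when that minimum falls below a safety radius. All positions, velocities, and the radius are hypothetical.

```python
def collision_scope(p1, v1, p2, v2, radius=2.0):
    """Return (hit, t, min_dist) for two objects moving at constant velocity."""
    dp = (p2[0] - p1[0], p2[1] - p1[1])   # relative position
    dv = (v2[0] - v1[0], v2[1] - v1[1])   # relative velocity
    dv2 = dv[0] ** 2 + dv[1] ** 2
    # Time at which |dp + t*dv| is minimal, clamped to the future (t >= 0).
    t = 0.0 if dv2 == 0 else max(0.0, -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2)
    cx, cy = dp[0] + t * dv[0], dp[1] + t * dv[1]
    min_dist = (cx ** 2 + cy ** 2) ** 0.5
    return min_dist <= radius, t, min_dist

# Head-on example: objects 100 m apart closing at 20 m/s meet at t = 5 s.
hit, t, d = collision_scope((0, 0), (10, 0), (100, 0), (-10, 0))
```

A learned classifier such as PCSMO replaces the constant-velocity assumption with trajectory features (e.g. Gann angles) extracted from video, but the output contract is the same binary in-scope/out-of-scope decision.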
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-242024-02-242521167119010.12694/scpe.v25i2.2621Security Enabled New Term Weight Measure Technique with Data Driven for Next Generation Mobile Computing Networks
//www.scpe.org/index.php/scpe/article/view/2624
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">In the fields of ASIC and FPGA design, Machine Learning (ML) techniques play a major role and have become predominant for producing accurate results in applications such as big data analysis, automotive electronics, and driverless vehicles, which require both speed and power savings. To meet the increasing demand for higher accuracy, low power, low area consumption, and higher throughput for complex designs in the latest technology, a reconfigurable hardware architecture is proposed that fulfills these demands in the ASIC and FPGA domains. It consists of an ML-based Support Vector Machine (SVM), the high-speed AHB protocol, and floating-point (FP) operations, and the system also has the flexibility to communicate over the I2C and I2S protocols. To increase throughput with minimal latency, the proposed architecture incorporates the AHB protocol and an AHB-to-APB bridge between the fabric dynamically reconfigurable multi-processor (FDPM) and the peripherals, along with SHA-256 and AES security algorithms. To support ML-based applications, the proposed system incorporates double-precision floating-point (DPFP) arithmetic operations. The overall architecture is developed in Verilog HDL, quality-checked using a LINT tool, analyzed for Clock Domain Crossing (CDC) using the Spyglass tool, and synthesized using the DC compiler for ASIC and Vivado Design Suite 2018.1 for FPGA implementation and verification. The entire design is interfaced with the Zynq processor and the SDK tool to verify data transfer between hardware and software. The obtained results show that the generated custom accelerator is able to compute complex ML classifiers over large amounts of data. Compared with existing state-of-the-art results, the design achieves an 18% improvement in throughput, a 21% improvement in power savings, and a 34% reduction in latency.</p>Anil Kumar BudatiShayla IslamMohammad Rafee ShaikChengamma ChittetiT. Lakshmi Narayana
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-242024-02-242521191119810.12694/scpe.v25i2.2624Optimizing Multichannel Path Scheduling in Cognitive Radio Ad Hoc Networks using Differential Evolution
//www.scpe.org/index.php/scpe/article/view/2649
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">Multichannel path scheduling is an important area of study in cognitive radio ad hoc networks. Cognitive radio networks have trouble communicating and using the spectrum effectively because of weather and dispersion, and optimizing multichannel path scheduling enhances network performance and reliability. The proposed Optimizing Multichannel Path Scheduling (OMPS) model methodically tackles various scheduling problems in this domain and effectively resolves multichannel path scheduling. The optimization method used in the study is Differential Evolution (DE). During optimization, several factors are carefully considered, including Channel Fade Margin, Cross-Correlation and Coherence Time, Spectral Efficiency, Interference Level, Power Consumption, Retransmission Rate, Access Probability, and Propagation Delay. To increase the scheduling efficiency of the DE algorithm, its steps are meticulously planned: initialization, mutation, crossover, fitness evaluation, selection for iterative evolution, and termination. Latency, Packet Delivery Ratio (PDR), Spectrum Utilization, Interference Level, Energy Efficiency, and Established Path Success Rate are all assessed by the OMPS model; these indicators measure the effectiveness and dependability of the network. In simulations of a variety of multichannel path scheduling situations, OMPS performs better than the existing models, showing decreased latency for real-time applications, greater packet delivery ratios, improved spectrum efficiency, reduced channel interference, better energy efficiency and connection-formation odds, and increased throughput that enhances network resource utilization.</p>Ramesh DasariVenkatram N
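The DE steps named in the abstract (initialization, mutation, crossover, fitness evaluation, selection, termination) can be sketched as a plain DE/rand/1/bin loop. The population size, F, CR, and the toy cost in the usage note are illustrative defaults, not the paper's settings:

```python
import random

def differential_evolution(cost, dim, bounds=(0.0, 1.0), pop_size=20,
                           f=0.8, cr=0.9, generations=100, seed=1):
    """Minimize `cost` over a box-bounded search space with DE/rand/1/bin."""
    rng = random.Random(seed)
    lo, hi = bounds
    # initialization
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(generations):           # terminate on generation budget
        for i in range(pop_size):
            # mutation: combine three distinct peers (DE/rand/1)
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)         # force at least one mutated gene
            trial = []
            for d in range(dim):            # binomial crossover
                if rng.random() < cr or d == jr:
                    v = pop[a][d] + f * (pop[b][d] - pop[c][d])
                    trial.append(min(max(v, lo), hi))  # clamp to bounds
                else:
                    trial.append(pop[i][d])
            tf = cost(trial)                # fitness evaluation
            if tf <= fit[i]:                # greedy selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

In the OMPS setting the cost function would aggregate the scheduling factors listed above; as a stand-in, minimizing `sum((v - 0.5) ** 2 for v in x)` converges to a vector near 0.5 in every dimension.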
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-242024-02-242521199121810.12694/scpe.v25i2.2649Iterative Ensemble Learning over High Dimensional Data for Sentiment Analysis
//www.scpe.org/index.php/scpe/article/view/2650
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">The problem of processing and analyzing high-dimensional data, particularly for sentiment analysis, has become more prominent in the recent past. This is where the IEL-HDDSA model, which aims to increase accuracy and performance in sentiment analysis over complex, high-dimensional data streams, comes into play; its iterative approach to ensemble learning is a contribution to the field. It integrates preprocessing techniques such as tokenization, stop-word removal, lemmatization, and the collection of sentiment-related features. The training corpus is then divided by label, and features with high mutual information are selected; highly replicated data points for model training can also be identified at this stage. A Naive Bayes model is first trained and then placed in an ensemble as part of bagging. The major advantage of IEL-HDDSA over earlier methods is that it can iteratively train on selected subsets of data until performance in sentiment analysis over high-dimensional data reaches an optimum. A 10-fold cross-validation method was used to rigorously evaluate the model, which showed consistently high performance with almost no variation across measures. IEL-HDDSA's precision ranged from 0.9359 to 0.9492, and its specificity was between 0. Its accuracy ranged from 0.93 to around 0.95, and its F1-measure fluctuated around 0.94 and above, so balance between precision and recall was well maintained. The false alarm rate ranged between 0.056 and 0.1, a fairly low ratio of incorrect positive classifications, and MCC values ranged from 0.8668 to 0. These results testify to the IEL-HDDSA model's stable effectiveness and high reproducibility in sentiment analysis applications, especially for massive data flows.</p>V R N S S V Saileela PN. Naga Malleswara Rao
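The mutual-information feature-selection step mentioned above can be sketched as follows. The discrete-feature assumption and the `select_features` helper are illustrative, not the paper's implementation:

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    """I(X;Y) in nats for a discrete feature column against class labels."""
    n = len(labels)
    joint = Counter(zip(feature, labels))
    px = Counter(feature)
    py = Counter(labels)
    mi = 0.0
    for (x, y), c in joint.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), expressed with raw counts
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

def select_features(columns, labels, k):
    """Return the indices of the k columns with highest mutual information."""
    scores = [(mutual_information(col, labels), i)
              for i, col in enumerate(columns)]
    return sorted(i for _, i in sorted(scores, reverse=True)[:k])
```

A feature that perfectly mirrors the labels scores log 2 ≈ 0.693 nats, while an independent feature scores 0, so the former is kept first.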
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-242024-02-242521219123410.12694/scpe.v25i2.2650Optimal Usage of Resources through Quality Aware Scheduling in Containers based Cloud Computing Environment
//www.scpe.org/index.php/scpe/article/view/2655
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">For cloud computing, the Quality Aware Scheduling of Containers (QASC) model has been proposed for delay-sensitive tasks. Scheduling cloud tasks so that time constraints are met while resources are used effectively is a genuinely difficult undertaking. To distribute containers more effectively, QASC takes a number of performance factors into account. The QASC model collects containers and their make-span logs, as well as input quality metrics such as I/O-intensive workload, startup time, hot-standby failure rate, and inter-container dependencies. A metric coefficient that indicates each container's overall rating is calculated by normalizing and averaging these values. To determine how well scheduling performed, the model also includes a quality coefficient that serves as a threshold on this metric coefficient. It is also critical for QASC to determine the remaining energy of each container, which represents its request capacity; to optimize cloud resources, energy use is likewise taken into account by the model. An experimental dataset including 50 containers and 1,200 internet-protocol-capable users was obtained from a CloudSim simulation, producing 20,000 data points for the make-span ratio, round-trip time, and energy consumption analysis. The QASC model was compared against the RLSched, DSTS, and ADATSA models. The outcomes showed that QASC performed better than these models in a number of crucial areas: tasks are managed better thanks to a higher average make-span ratio and lower volatility, and its shorter round-trip durations and lower energy usage across loads further demonstrated superior job scheduling and resource use. The QASC model is a sophisticated scheduling method for container-based systems and a significant advancement in cloud computing research. Its approaches enable more intelligent energy use as well as high-quality services while also improving system performance, particularly for delay-sensitive tasks.</p>Poojitha S.A.Ravindranath K
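The metric-coefficient computation described above (normalize each quality metric, average into one per-container rating, then threshold by the quality coefficient) can be sketched as below. The metric names, bounds, and the higher-is-better orientation are assumptions for illustration, not the paper's definitions:

```python
def metric_coefficient(metrics, bounds):
    """Normalize each quality metric to [0, 1] against its observed
    (min, max) bounds and average them into one per-container rating.
    Metrics are assumed oriented so that larger values are better."""
    vals = []
    for name, value in metrics.items():
        lo, hi = bounds[name]
        vals.append((value - lo) / (hi - lo) if hi > lo else 0.0)
    return sum(vals) / len(vals)

def schedule_order(containers, bounds, quality_threshold=0.5):
    """Keep containers whose rating clears the quality coefficient,
    best-rated first. `containers` is a list of (id, metrics) pairs."""
    rated = [(metric_coefficient(m, bounds), cid) for cid, m in containers]
    return [cid for score, cid in sorted(rated, reverse=True)
            if score >= quality_threshold]
```

With two hypothetical containers, one rating 1.0 and one rating 0.2 against a 0.5 quality coefficient, only the first is scheduled.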
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-242024-02-242521235124510.12694/scpe.v25i2.2655Enhanced Feature Optimization for Multiclass Intrusion Detection in IoT Fog Computing Environments
//www.scpe.org/index.php/scpe/article/view/2657
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">To overcome the shortcomings of traditional security measures in IoT fog computing, the Multiclass Intrusion Detection (MCID) model is put forward. The model's goal is to improve intrusion detection by identifying and classifying different attack types. Behavioral, temporal, and anomaly features are fused through SVM-BFE to obtain the best possible selection of high-worth features, and a Random Forest algorithm is then used to classify them robustly. The model is also adaptable to the ever-changing security demands of fog computing. MCID's ability to improve the security of fog computing is shown by a 4-fold cross-validation, which returns precision rates up to 99.43%, recall of about 95%, and F-measures of as much as 97.17%; specificity rates across the folds were similarly high.</p>Sudarshan S. Sonawane
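The SVM-BFE step is, per the abstract, a backward feature elimination guided by an SVM. A minimal greedy sketch is shown below, with the SVM's cross-validated score abstracted behind a caller-supplied `score` function; all names here are illustrative, not the paper's implementation:

```python
def backward_feature_elimination(features, score, keep):
    """Greedy backward elimination: repeatedly drop the feature whose
    removal hurts the subset score least, until `keep` features remain.
    `score` maps a frozenset of feature names to a validation score
    (an SVM's cross-validated accuracy in the MCID setting)."""
    selected = set(features)
    while len(selected) > keep:
        # the "worst" feature is the one whose removal leaves the best score
        worst = max(selected,
                    key=lambda feat: score(frozenset(selected - {feat})))
        selected.remove(worst)
    return selected
```

With a toy scorer that counts how many genuinely informative features survive, eliminating down to two features keeps exactly the informative pair.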
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-242024-02-242521246126310.12694/scpe.v25i2.2657Quality Enhancement with Frame-wise DLCNN using High Efficiency Video Coding in 5G Networks
//www.scpe.org/index.php/scpe/article/view/2658
<p style="-qt-block-indent: 0; text-indent: 0px; margin: 0px;">At present, multimedia applications rely heavily on video, and the number of end consumers who use video continues to rise every day. People now search for better-quality versions of the videos already available, which drives the launch and dissemination of high-definition (HD) videos; ultra-high-definition (UHD) videos are becoming more and more popular as a result of this advancement and demand. However, as video communication keeps expanding, network traffic surges against the limited available bandwidth, especially in smart cities. Different advanced codecs have been suggested to handle this data stream, but the huge amount of data in modern UHD videos makes the available codecs even more complicated. UHD videos can be processed with the latest codec, H.265/High-Efficiency Video Coding (HEVC); nevertheless, it suffers from increased power consumption and intricate calculations. These limitations confine the codec to specific applications and prevent its use in wireless, mobile, or portable settings. Hence, this research concentrates on frame-level quality enhancement through a deep learning network known as FQE-Net. The deep learning convolutional neural network (DLCNN) is specifically crafted to handle videos with resolutions up to 16K. Its primary objectives are reducing complexity, minimizing artifacts, enhancing the efficiency of the HEVC codec, and reducing energy consumption. To achieve superior efficiency, the DWT transforms within the HEVC codec are replaced with a DLCNN model. Additionally, the Content Block Search Algorithm for motion estimation and compensation is incorporated, alongside filtering techniques such as the Sample Adaptive Filter and the Deblocking Filter. The simulation results showed that the suggested FQE-Net performed better than the conventional techniques.</p>Vijaya Saradhi DommetiM. DharaniK. ShasidharY Dasaratha Rami ReddyT. Venkatakrishna Moorthy
Copyright (c) 2024 Scalable Computing: Practice and Experience
2024-02-242024-02-242521264127510.12694/scpe.v25i2.2658