Bakr, A. A., M. H. Fahmy, A. S. Elward, H. A. Balamoun, M. Y. Ibrahim, and R. M. Eldahdoh, Analysis of Medium-Term Weight Regain 5 Years After Laparoscopic Sleeve Gastrectomy, 6, 2019.
Agarwal, R. P., M. Bohner, S. R. Grace, and D. O’Regan, Annals of the University of Craiova, Volume 33 (2006), 2006.
Elfiky, A. A., "Anti-HCV, nucleotide inhibitors, repurposing against COVID-19", Life Sciences, vol. 248, 2020.
Elfiky, A. A., E. B. Azzam, and M. W. Shafaa, "The anti-HCV, Sofosbuvir, versus the anti-EBOV Remdesivir against SARS-CoV-2 RNA dependent RNA polymerase in silico", Molecular Diversity, vol. 26, issue 1, pp. 171-181, 2022.
Smolinski, T. G., M. G. Milanova, and A. - E. Hassanien, Applications of Computational Intelligence in Biology: Current Trends and Open Problems, : Springer, 2008.
Corcoran, J., and H. Dawood, Applied Logic Flowchart, December 2018. Abstract

The dynamically combined deductive and hypothetico-deductive method has been available to objective investigators since ancient times. Only in the last half-century has it been taught in courses on scientific method and critical thinking. The chart below, for teaching and applying the method, is only about thirty years old.

Galal-Edeen, G. H., Architectonics, Cultures and Other Stories, London, University of North London, June 2002.
Galal, G. H., Architectural Theory: Grounding or Stipulation? A paradigmatic comparison between the work of Christopher Alexander and Henry Glassie, no. RN/98/130, London, Department of Computer Science, University College London, University of London, December 1998.
Abbas, M., A. Amer, and H. Rakha, "Assessing the Impact of ADA Recommended Pedestrian Walking Speed on the Operation of Diamond and SPUI Interchanges", ITE 2008 Annual Meeting and Exhibit Compendium of Technical Papers, Anaheim, CA, Institute of Transportation Engineers, pp. 12, 2008.
Sayed, H. S., M. M. Megahed, H. H. Omar, and I. M. Ismael, Assessment of cathode swelling pressure using nonlinear finite element technique, : Minerals, Metals and Materials Society, Warrendale, PA (United States), 1996.
Ayoub, D. R., Zeinab Sarhan, Mohsen Askar, and F. Zyada, "Assessment of coping strategies in a sample of risky suicidal Egyptian psychiatric outpatients", Egyptian Journal of Psychiatry, vol. 34, issue 3, pp. 177-185, 2013.
Abdullah, A., "ATPMat®: An Open Source Toolbox for Interfacing ATP and Matlab®", 2015.
El Maghraby, E. E., A. M. Gody, and H. M. Farouk, "Audio-Visual Speech Recognition Using LSTM and CNN", (Formerly Recent Patents on Computer Science), vol. 14, issue 6, pp. 2023-2039, 2021. Abstract

Background: Multimodal speech recognition has proved to be one of the most promising solutions for robust speech recognition, especially when the audio signal is corrupted by noise. Because the visual speech signal is not affected by acoustic noise, it can supply additional information that enhances recognition accuracy in noisy environments. A critical stage in designing a robust speech recognition system is choosing a reliable classification method from the large variety of available techniques. Deep learning is well known for its ability to classify nonlinear problems and to take the sequential character of the speech signal into account. Numerous studies have applied deep learning to Audio-Visual Speech Recognition (AVSR) problems, motivated by its achievements in both speech and image recognition. Although these studies have obtained optimistic results, enhancing accuracy in noisy environments and selecting the best classification technique still attract considerable attention.
Objective: This paper aims to build an AVSR system that combines acoustic and visual speech information and uses a deep-learning-based classification technique to improve recognition performance in both clean and noisy environments.
Methods: Mel Frequency Cepstral Coefficients (MFCC) and the Discrete Cosine Transform (DCT) are used to extract effective features from the audio and visual speech signals, respectively. Since the audio feature rate is greater than the visual feature rate, linear interpolation is applied to obtain feature vectors of equal length, which are then early-integrated into a combined feature vector. Bidirectional Long Short-Term Memory (BiLSTM), one of the deep learning techniques, is used for classification, and its results are compared with other classification techniques such as the Convolutional Neural Network (CNN) and the traditional Hidden Markov Model (HMM). The effectiveness of the proposed model is demonstrated on two multi-speaker AVSR datasets, AVletters and GRID.
Results: The proposed model gives promising results. On GRID, the integrated audio-visual features achieved the highest recognition accuracies of 99.07% and 98.47%, improvements of up to 9.28% and 12.05% over audio-only features for clean and noisy data, respectively. On AVletters, the highest recognition accuracy is 93.33%, an improvement of up to 8.33% over audio-only features.
Conclusion: Based on the obtained results, we conclude that increasing the audio feature vector size from 13 to 39 does not noticeably improve recognition accuracy in a clean environment, but it does give better performance in a noisy one. BiLSTM is considered the optimal classifier for a robust speech recognition system when compared with CNN and traditional HMM, because it takes into account the sequential character of the speech signal (both audio and visual). The proposed model substantially improves recognition accuracy and reduces the loss value in both clean and noisy environments relative to audio-only features. Comparing the proposed model with previously reported results on the same datasets, our model achieves higher recognition accuracy, which confirms its robustness.
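The feature-rate matching and early integration described in the Methods can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the frame counts, feature dimensions, and the `early_fusion` helper name are hypothetical, and only the general scheme (upsampling the slower visual stream by linear interpolation, then concatenating per frame) comes from the abstract.

```python
import numpy as np

def early_fusion(audio_feats, visual_feats):
    """Upsample visual features to the audio frame count via linear
    interpolation, then concatenate per frame (early integration)."""
    n_audio = audio_feats.shape[0]    # audio frames (higher rate, e.g. MFCCs)
    n_visual = visual_feats.shape[0]  # visual frames (lower rate, e.g. DCT coeffs)
    # Place both streams on a common normalized time axis [0, 1].
    t_audio = np.linspace(0.0, 1.0, n_audio)
    t_visual = np.linspace(0.0, 1.0, n_visual)
    # Interpolate each visual feature dimension onto the audio time axis.
    visual_up = np.stack(
        [np.interp(t_audio, t_visual, visual_feats[:, d])
         for d in range(visual_feats.shape[1])],
        axis=1)
    # Early integration: one combined feature vector per audio frame.
    return np.concatenate([audio_feats, visual_up], axis=1)

# Toy shapes for illustration only: 100 audio frames of 39 MFCCs,
# 25 visual frames of 35 DCT coefficients.
audio = np.random.randn(100, 39)
visual = np.random.randn(25, 35)
fused = early_fusion(audio, visual)
print(fused.shape)  # (100, 74)
```

The fused sequence would then feed a sequence classifier such as a BiLSTM; the interpolation step exists only so that the audio and visual streams have the same number of time steps before concatenation.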