Acceptance rates vary widely across IEEE and Elsevier journals. However, some of them are easier to get accepted into and offer quick turnaround times:
For example, a paper on deep reinforcement learning for autonomous robots was accepted with minor revisions after a fast review, thanks to the journal’s broad scope for AI research.
Some of the most common reasons AI/ML papers are rejected from these journals include:
Papers that fail to deliver new insights or a substantial contribution are usually rejected.
A paper proposing a basic linear regression model without novel contributions or enhancements could be rejected for not offering significant advancements.
If the paper does not fit the journal’s specific focus area (e.g., NLP or computer vision), it can be desk-rejected.
An AI ethics paper may be rejected by a technical journal such as IEEE Transactions on Neural Networks because the topic falls outside its scope.
Journals expect solid experimental results, with reasonable metrics and benchmarks demonstrating the effectiveness of the proposed methods.
A paper with weak benchmarking results, or one missing critical comparisons to established models such as ResNet or VGG-16, may face rejection.
Technical or presentation errors detract from a paper’s quality and can result in outright rejection.
A well-organized paper is essential for a successful submission:
Abstract & Keywords: A concise summary of the research problem, approach, and results. Example:
“In this paper, we propose a novel deep learning-based approach to enhance predictive accuracy in real-time traffic management. Keywords: AI, deep learning, traffic prediction.”
Introduction: The problem statement, contributions, and significance should be stated clearly.
“The traffic prediction problem has long been a challenge in smart city planning. This paper introduces an AI model that outperforms existing approaches by incorporating real-time sensor data and machine learning techniques.”
Literature Review: Discuss related prior work and position your research within it.
Methodology: Describe your model, algorithms, or techniques, often supported with diagrams.
Experiments & Results: Data-driven results should be presented with well-defined metrics.
Discussion: Analyze the results and place them in context.
“The model achieved 95% accuracy in predicting traffic patterns, outperforming traditional models like ARIMA and LSTM-based models.”
Conclusions & Future Scope: Summary of the results and directions for future studies.
References: Use IEEE or APA citation style, applied consistently.
High-end journals generally expect roughly 30-40% novel content. Novelty can take several forms:
New algorithms or substantial improvements to existing techniques.
Proposing a hybrid deep learning model combining CNNs and RNNs for video classification, rather than using traditional CNNs alone.
New applications of existing models to new domains or problems.
Applying reinforcement learning to robotic control for real-time decision making in autonomous vehicles, an emerging area.
A new dataset, or a unique fusion of several datasets, that opens up unexplored research areas.
Creating a new dataset for AI-driven medical image segmentation that improves the availability of high-quality labeled data.
The novelty needs to be clearly defined and supported by strong experimental results.
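To make the hybrid-model example above concrete, here is a minimal sketch of a CNN+RNN video classifier in PyTorch: a small CNN extracts per-frame features and an LSTM aggregates them over time. The class name, layer sizes, and toy input shapes are illustrative assumptions, not a prescribed architecture.

```python
import torch
import torch.nn as nn

class CNNRNNClassifier(nn.Module):
    """Hypothetical hybrid model: a CNN encodes each frame,
    an LSTM models the temporal sequence of frame features."""
    def __init__(self, num_classes=10, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):  # x: (batch, time, channels, H, W)
        b, t = x.shape[:2]
        # Run the CNN over all frames at once, then restore the time axis.
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])  # classify from the last time step

model = CNNRNNClassifier()
logits = model(torch.randn(2, 8, 3, 32, 32))  # 2 clips, 8 frames each
print(logits.shape)  # torch.Size([2, 10])
```

A paper proposing such a hybrid would still need to show, experimentally, that the temporal modeling beats a frame-level CNN baseline.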
It is not mandatory, but sharing code and datasets is usually encouraged because it makes the work reproducible.
GitHub repository: Many journals encourage linking to your code hosted on platforms such as GitHub.
A paper on AI for autonomous drone navigation could include a link to the GitHub repository containing the trained model and evaluation code.
Public datasets: Journals view the use of, or contribution of, public datasets as enhancing a submission’s quality.
A study on AI in healthcare diagnostics could benefit from using publicly available datasets like ImageNet or Kaggle medical datasets.
Reproducibility: Sharing code ensures that other researchers can reproduce your work, which increases its scientific value.
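As a small illustration of reproducibility in practice, the sketch below fixes the common sources of randomness so reruns produce identical results. The `set_seed` helper and the seed values are hypothetical conventions, not a library API; deep learning frameworks add their own seeds (e.g., `torch.manual_seed`).

```python
import random
import numpy as np

def set_seed(seed: int = 42) -> None:
    """Hypothetical helper: seed the standard library and NumPy
    so repeated runs draw identical random numbers."""
    random.seed(seed)
    np.random.seed(seed)

set_seed(123)
a = np.random.rand(3)
set_seed(123)
b = np.random.rand(3)
print(np.array_equal(a, b))  # True: identical draws after reseeding
```

Shipping a seed-fixing utility like this alongside the released code helps reviewers and readers verify reported numbers.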
Choosing the right journal involves several considerations:
Scope alignment: Check whether the journal’s stated scope covers your study area (NLP, computer vision, or AI ethics).
If your work focuses on natural language processing, journals like Elsevier’s Computer Speech & Language are a better match than a general AI journal.
Impact Factor or CiteScore: High-impact journals are more competitive, but they offer greater visibility.
For high-impact research on AI in healthcare, you might aim for IEEE Transactions on Biomedical Engineering (high impact factor).
Acceptance rate: Some journals, such as IEEE Access, have notably quick review and acceptance processes.
Open Access versus Subscription: Open access improves the reach of your work but comes with publication fees.
IEEE Access: This takes about 6-10 weeks for an initial review.
A paper on AI-based anomaly detection in sensor networks was reviewed and accepted in approximately 2 months.
Elsevier Applied Intelligence: Review durations generally take 2-4 months.
A paper on predictive modeling in machine learning underwent review in about 3 months.
Pattern Recognition (Elsevier): 3-6 months, owing to its in-depth reviews.
Review times depend on reviewer availability and manuscript quality.
Yes, but:
Make sure the manuscript differs substantially from the thesis, in line with the journal’s criteria.
A paper derived from a thesis on AI in healthcare could be submitted if it’s rewritten to focus on new methodologies for patient diagnosis, ensuring it differs from the thesis content.
Professional editing is recommended if English is not your first language. Even sound research can be desk-rejected for poor grammar or unclear writing.
Editing:
Professional editing services can significantly improve acceptance chances.
Example: A technically strong paper on AI in autonomous vehicles may still be rejected for poor grammar and ambiguous phrasing.
Acceptance depends on the quality of the research, not the author’s credentials:
A paper on AI-powered real-time speech recognition with solid experimental results will likely be accepted over one that lacks novelty or validation.
Being a PhD scholar matters less than presenting high-quality, well-structured research.