IEEE Computational Intelligence Magazine


The IEEE Computational Intelligence Magazine (CIM) publishes peer-reviewed articles that present emerging novel discoveries, important insights, or tutorial surveys in all areas of computational intelligence design and applications, in keeping with the Field of Interest of the IEEE Computational Intelligence Society (IEEE/CIS). Additionally, CIM serves as a medium of communication between the governing body of the IEEE/CIS and its membership. Authors are encouraged to submit papers on applications-oriented developments, successful industrial implementations, design tools, technology reviews, computational intelligence education, and applied research.

Contributions should contain novel and previously unpublished material. The novelty will usually lie in original concepts, results, techniques, observations, hardware/software implementations, or applications, but may also come from syntheses or new insights into previously reported research. Surveys and expository submissions are also welcome. In general, material that has been previously copyrighted, published, or accepted for publication will not be considered for publication; however, prior preliminary or abbreviated publication of the material shall not preclude publication in this journal.


Impact Score


Journal Citation Metrics such as Impact Factor, Eigenfactor Score™, and Article Influence Score™ are available where applicable. Each year, Journal Citation Reports® (JCR) from Thomson Reuters examines the influence and impact of scholarly research journals. JCR reveals the relationship between citing and cited journals, offering a systematic, objective means to evaluate the world's leading journals. Find out more about IEEE Journal Rankings.

Special Issues

Artificial Intelligence eXplained (AI-X)  [Call for Papers]
Guest Editors: Pau-Choo Chung, Alexander Dockhorn, Jen-Wei Huang
Submission Deadline: April 15, 2023 (extended from March 31, 2023)
Supporting Files: Template Files, Author Instructions
Tutorial Webinar: Introducing Immersive Articles and How to Write Them

Data-Driven Learning for Autonomous Driving [Call for Papers]
Guest Editors: Qichao Zhang, Zhen Ni, Danil Prokhorov, Dacheng Tao
Submission Deadline: May 31, 2023

 

Featured Paper

Bridging the Gap Between AI and Explainability in the GDPR: Towards Trustworthiness-by-Design in Automated Decision-Making
Ronan Hamon, Henrik Junklewitz, Ignacio Sanchez, Gianclaudio Malgieri, and Paul De Hert
IEEE Computational Intelligence Magazine (Volume: 17, Issue: 1, Feb. 2022)

Abstract: Can satisfactory explanations for complex machine learning models be achieved in high-risk automated decision-making? How can such explanations be integrated into a data protection framework safeguarding a right to explanation? This article explores from an interdisciplinary point of view the connection between existing legal requirements for the explainability of AI systems set out in the General Data Protection Regulation (GDPR) and the current state of the art in the field of explainable AI. It studies the challenges of providing human-legible explanations for current and future AI-based decision-making systems in practice, based on two scenarios of automated decision-making: credit scoring risks and medical diagnosis of COVID-19. These scenarios exemplify the trend towards increasingly complex machine learning algorithms in automated decision-making, both in terms of data and models. Current machine learning techniques, in particular those based on deep learning, are unable to make clear causal links between input data and final decisions. This represents a limitation for providing exact, human-legible reasons behind specific decisions, and presents a serious challenge to the provision of satisfactory, fair, and transparent explanations. Therefore, the conclusion is that the quality of explanations might not be considered an adequate safeguard for automated decision-making processes under the GDPR. Accordingly, additional tools should be considered to complement explanations. These could include algorithmic impact assessments, other forms of algorithmic justification based on broader AI principles, and new technical developments in trustworthy AI. This suggests that eventually all of these approaches would need to be considered as a whole.

Index Terms: Law, Decision making, Data models, General Data Protection Regulation, Machine learning algorithms, Deep learning, Security, COVID-19

IEEE Xplore Link: https://ieeexplore.ieee.org/document/9679770

 

Difficulties in Fair Performance Comparison of Multi-Objective Evolutionary Algorithms [Research Frontier]
Hisao Ishibuchi, Lie Meng Pang, and Ke Shang
IEEE Computational Intelligence Magazine (Volume: 17, Issue: 1, Feb. 2022)

Abstract: The performance of a newly designed evolutionary algorithm is usually evaluated by computational experiments in comparison with existing algorithms. However, comparison results depend on the experimental setting; thus, fair comparison is difficult. Fair comparison of multi-objective evolutionary algorithms is even more difficult, since solution sets instead of single solutions are evaluated. In this paper, the following four issues are discussed for fair comparison of multi-objective evolutionary algorithms: (i) termination condition, (ii) population size, (iii) performance indicators, and (iv) test problems. Whereas many other issues related to computational experiments, such as the choice of a crossover operator and the specification of its probability, can be discussed for each algorithm separately, all the above four issues should be addressed for all algorithms simultaneously. For each issue, its strong effects on comparison results are first clearly demonstrated. Then, the handling of each issue for fair comparison is discussed. Finally, future research topics related to each issue are suggested.
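As a concrete illustration of issue (iii) — this sketch is not from the paper itself — the hypervolume indicator, a standard solution-set quality measure, depends on a user-chosen reference point, and that choice alone can flip which of two solution sets is judged better. The sets A and B and the reference points below are hypothetical examples for a bi-objective minimization problem:

```python
def hypervolume_2d(points, ref):
    """Exact 2D hypervolume (minimization): the area dominated by `points`
    and bounded by the reference point `ref`. Assumes `points` is mutually
    nondominated, so sorting ascending in f1 sorts descending in f2."""
    pts = sorted(points)
    next_f1s = [p[0] for p in pts[1:]] + [ref[0]]
    hv = 0.0
    for (f1, f2), next_f1 in zip(pts, next_f1s):
        # Rectangle contributed exclusively by this point.
        hv += (next_f1 - f1) * (ref[1] - f2)
    return hv

# Two hypothetical nondominated solution sets:
A = [(0.2, 0.6), (0.3, 0.5), (0.4, 0.4)]   # well converged, concentrated
B = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]   # less converged, widely spread

for ref in [(1.1, 1.1), (2.0, 2.0)]:
    print(ref, hypervolume_2d(A, ref), hypervolume_2d(B, ref))
# With ref (1.1, 1.1), A scores higher (0.60 vs 0.46); with ref (2.0, 2.0),
# B scores higher (3.25 vs 2.85) -- the ranking flips with the reference point.
```

A larger reference point inflates the contribution of boundary solutions, favoring the spread set B, while a reference point close to the front rewards convergence, favoring A; this is why indicator settings must be fixed identically for all compared algorithms.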

Index Terms: Social factors, Evolutionary computation, Robustness, Statistics, Optimization, Convergence

IEEE Xplore Link: https://ieeexplore.ieee.org/document/9679762