Although object detection performance has improved significantly in recent years, there is still considerable room for better multi-scale feature fusion methods and loss function design. Specifically, we propose Multiple Trick Feature Pyramid Networks (MT-FPN), which combine several techniques, including feedback information, a global module, an attention mechanism, and fusion of refined information, to address insufficient multi-scale feature fusion. We also propose Dynamic Balanced L1 Loss (DBLL), which uses a dynamic strategy and resolves the derivative discontinuity problem, to relieve the inconsistency between the dynamic training process and fixed loss parameters. By replacing FPN with MT-FPN, our Average Precision (AP) on Microsoft Common Objects in Context (MS COCO) is 5.1 points and 3.8 points higher than FPN Faster R-CNN and Libra R-CNN, respectively. Without any bells and whistles, our experiments also show that the combined application of MT-FPN and DBLL achieves competitive performance compared with most advanced detectors on the MS COCO benchmark.
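The abstract does not specify DBLL's exact form, but it positions the loss against the balanced L1 loss of Libra R-CNN, one of the compared baselines. For context, the sketch below is a minimal NumPy implementation of that published balanced L1 loss, with the constants `b` and `C` derived so the loss value and its gradient are continuous at the turning point `|x| = beta` (the derivative-continuity issue the abstract alludes to). Parameter defaults follow the Libra R-CNN paper; nothing here is taken from the MT-FPN/DBLL paper itself.

```python
import numpy as np

def balanced_l1_loss(diff, alpha=0.5, gamma=1.5, beta=1.0):
    """Balanced L1 loss (Libra R-CNN), illustrative sketch.

    diff  : array of box-regression residuals (pred - target)
    For |x| < beta the loss grows sub-quadratically (log-weighted);
    beyond beta it is linear with slope gamma. The constants b and C
    are chosen so that both the loss and its derivative are continuous
    at |x| = beta.
    """
    x = np.abs(diff)
    # Gradient continuity at |x| = beta requires alpha * ln(b*beta + 1) = gamma.
    b = (np.exp(gamma / alpha) - 1.0) / beta
    # Inner branch: gradient is alpha * ln(b*x + 1), which vanishes at x = 0.
    inner = alpha / b * (b * x + 1.0) * np.log(b * x + 1.0) - alpha * x
    # C enforces value continuity at |x| = beta between the two branches.
    C = (alpha / b * (b * beta + 1.0) * np.log(b * beta + 1.0)
         - alpha * beta - gamma * beta)
    outer = gamma * x + C
    return np.where(x < beta, inner, outer)
```

A "dynamic" variant in the spirit of DBLL would adjust parameters such as `beta` over the course of training rather than fixing them; the paper's actual schedule is not given in this abstract.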
International Journal of Wireless and Mobile Computing – Inderscience Publishers
Published: Jan 1, 2022