In this paper, we consider the color-plus-mono dual-camera system and propose an end-to-end convolutional neural network to align and fuse images from it in an efficient and cost-effective way. Our method takes cross-domain and cross-scale images as input, and consequently synthesizes HR colorization results to facilitate the trade-off between spatial-temporal resolution and color depth in the single-camera imaging system. In contrast to previous colorization methods, ours can adapt to color and monochrome cameras with distinctive spatial-temporal resolutions, rendering flexibility and robustness in practical applications. The key ingredient of our method is a cross-camera alignment module that generates multi-scale correspondences for cross-domain image alignment. Through extensive experiments on various datasets and multiple settings, we validate the flexibility and effectiveness of our approach. Remarkably, our method consistently achieves substantial improvements, i.e., around 10 dB PSNR gain, over state-of-the-art methods.

In this paper, a novel method for recoloring a textile image with polarization observation is proposed. By polarization image analysis, the proposed method can easily capture the specular reflection of yarns. Polarization image analysis also contributes to highly accurate region segmentation of warp and weft yarns. The conventional method using a convex hull assumes that an image is composed of at least four color elements. Our proposed method can overcome the limitation of having too few textile colors: it can process textile images composed of only two color elements, where existing methods struggle. In addition, we show how to generate images recolored both correctly and naturally through interactive alpha-matting, instead of aiming only at efficiency in recoloring, which sometimes results in unnatural images. A user study is performed to test the GUI indicators and the naturalness of the recoloring results. Extensive experiments show that the proposed method generates far better textile recoloring results compared to existing approaches.

Existing color editing algorithms enable users to edit the colors in an image according to their own aesthetics. Unlike artists, who have an accurate grasp of color, ordinary users are inexperienced in color selection and matching, and allowing non-professional users to edit colors arbitrarily may lead to unrealistic editing results. To address this issue, we introduce a palette-based approach for realistic object-level image recoloring. Our data-driven approach consists of an offline learning part that learns the color distributions of different objects in the real world, and an online recoloring part that first recognizes the object category and then recommends appropriate, realistic candidate colors learned in the offline step for that category.
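The convex-hull assumption mentioned above is easy to demonstrate: a convex hull in 3-D RGB space is non-degenerate only when the image contains at least four non-coplanar colors, so a two-color textile image cannot be handled by hull-based palette extraction. The sketch below is a minimal, generic illustration of that geometric limitation (using `scipy`'s Qhull wrapper), not the implementation from any of the papers summarized here; the function name `palette_hull` is our own.

```python
import numpy as np
from scipy.spatial import ConvexHull, QhullError

def palette_hull(pixels):
    """Attempt hull-based palette extraction over an image's unique RGB colors.

    Returns the hull vertices (candidate palette colors), or None when the
    color set is degenerate (fewer than four non-coplanar points in RGB
    space), as happens for a two-color textile image.
    """
    colors = np.unique(pixels.reshape(-1, 3), axis=0).astype(float)
    try:
        hull = ConvexHull(colors)
    except (QhullError, ValueError):
        return None  # Qhull rejects degenerate (flat or too-small) point sets
    return colors[hull.vertices]

# Four non-coplanar colors: a valid hull, hence a usable palette.
rich = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0], [0, 0, 255]], np.uint8)
print(palette_hull(rich).shape)  # (4, 3)

# Two color elements only: hull construction is degenerate.
textile = np.array([[30, 30, 200], [200, 40, 40]] * 10, np.uint8)
print(palette_hull(textile))     # None
```

This degeneracy is exactly why the polarization-based method above is needed for low-color-count textiles: it segments yarns from specular-reflection cues instead of relying on the geometry of the color distribution.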