StylePuncher: Encoding a Hidden QR Code into Images

Farhad Shadmand, Luiz Schirmer, Nuno Gonçalves

Published 2025 in International Conference on Pattern Recognition Applications and Methods

ABSTRACT

Recent advancements in steganography and deep learning have enabled the creation of security methods for imperceptible embedding of data within images. However, many of these methods require substantial time and memory during the training and testing phases. This paper introduces a lighter steganography (also applicable to watermarking purposes) approach, StylePuncher, designed for encoding and decoding 2D binary secret messages within images. The proposed network combines an encoder utilizing neural style transfer techniques with a decoder based on an image-to-image transfer network, offering an efficient and robust solution. The encoder takes a (512 × 512 × 3) image along with a high-capacity 2D binary message containing 4096 bits (e.g., a QR code or a simple grayscale logo) and "punches" the message into the cover image. The decoder, trained using multiple weighted loss functions and noise perturbations, then recovers the embedded message. In addition to demonstrating the success of StylePuncher, this paper provides a detailed analysis of the model's robustness when exposed to various noise perturbations. Despite its lightweight and fast architecture, StylePuncher achieved a notably high decoding accuracy under noisy conditions, outperforming several state-of-the-art steganography models.
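The dimensions described in the abstract can be illustrated with a toy sketch: a 4096-bit message forms a 64 × 64 binary grid, each bit is expanded to an 8 × 8 block tiling the 512 × 512 cover, and the message is embedded as a faint residual. Note this is only a shape-level illustration under assumed values (e.g., the `ALPHA` embedding strength is hypothetical); the actual StylePuncher encoder and decoder are learned networks, not the hand-coded residual embedding shown here.

```python
import numpy as np

BITS = 4096                 # message capacity reported in the abstract
SIDE = int(BITS ** 0.5)     # 64 x 64 binary grid (e.g., a QR code)
H = W = 512                 # cover image resolution

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(H, W, 3)).astype(np.float32)
message = rng.integers(0, 2, size=(SIDE, SIDE)).astype(np.float32)

# Expand each message bit to an 8 x 8 block so the grid tiles the cover.
scale = H // SIDE
residual = np.kron(message * 2 - 1, np.ones((scale, scale)))  # values in {-1, +1}

# "Punch" the message in as a faint, uniform perturbation of all channels.
ALPHA = 2.0  # hypothetical embedding strength; StylePuncher learns this implicitly
stego = np.clip(cover + ALPHA * residual[..., None], 0, 255)

# Toy decoder: average the per-pixel difference over each block and threshold.
diff = (stego - cover).mean(axis=2)
blocks = diff.reshape(SIDE, scale, SIDE, scale).mean(axis=(1, 3))
recovered = (blocks > 0).astype(np.float32)

bit_accuracy = (recovered == message).mean()
```

In the real system the decoder has no access to the original cover image and must survive noise perturbations, which is why a trained image-to-image network replaces the naive differencing used above.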

PUBLICATION RECORD

  • Publication year

    2025

  • Venue

    International Conference on Pattern Recognition Applications and Methods


  • Fields of study

    Computer Science


  • Source metadata

    Semantic Scholar


