
I'm trying to hide information in an H264 video. When I stitch the video up, split it into frames again and try to read it, the information is lost

18 May 2024, by Wer Wer

I'm trying to create a video steganography Python script. The algorithm for hiding will be:


- convert any video codec into H264 lossless
- save the audio of the video and split the H264 video into frames
- hide my txt secret into frame0 using the LSB replacement method (the idea is sketched just after this list)
- stitch the video back up and put the audio back in
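
For the LSB replacement step, the idea (a minimal sketch with a throwaway set_lsb helper, not part of my actual script further down) is that each secret bit overwrites the least significant bit of one colour channel, so each channel value changes by at most 1:

import numpy as np

def set_lsb(channel_value: int, bit: str) -> int:
    # Overwrite the least significant bit of one 8-bit channel value
    return (channel_value & 0b11111110) | int(bit)

pixel = np.array([200, 117, 42], dtype=np.uint8)  # B, G, R as OpenCV stores it
for i, bit in enumerate("101"):
    pixel[i] = set_lsb(int(pixel[i]), bit)
print(pixel)  # [201 116 43] -> only the lowest bit of each channel changed
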
...and when I want to recover the text, I'll


- save the audio of the video and split the encoded H264 video into frames
- retrieve my hidden text from frame0 and print the text

So, this is what I can do:

- split the video
- hide the text in frame0
- retrieve the text from frame0
- stitch the video
But after stitching the video, when I tried to retrieve the text by splitting that encoded video, it appears that the text has been lost. This is because I got the error:


UnicodeEncodeError: 'charmap' codec can't encode character '\x82' in position 21: character maps to <undefined>
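
As far as I can tell, that UnicodeEncodeError comes from print() trying to write characters my console's charmap codec can't represent, which already hints that decode() is returning garbage rather than my original text. To inspect the decoded result without the console raising, I can do something like this (reusing decode(), encoded_image_path and file_content from the code below):

recovered = decode(encoded_image_path)
print(ascii(recovered[:80]))  # ascii() escapes every non-ASCII character, so printing can't fail
print("matches original:", recovered == file_content)
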


I'm not sure if my LSB replacement was undone (which would explain why I can't retrieve my frame 0 information), or if the H264 conversion command I used actually converted my video into a lossy H264 version instead of a lossless one (which I don't believe, because I specified -qp 0).
This was the command I used to convert my video:


ffmpeg -i video.mp4 -t 12 -c:v libx264 -preset veryslow -qp 0 output.mp4
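
One way I can double-check that the output really came out lossless (assuming ffprobe is installed alongside ffmpeg) is to inspect the video stream's profile and pixel format, since libx264 marks lossless output with the High 4:4:4 Predictive profile, which is also one of the checks listed in my code's docstring below:

ffprobe -v error -select_streams v:0 -show_entries stream=profile,pix_fmt -of default=noprint_wrappers=1 output.mp4
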



This is my code:


import json
import os
import magic
import ffmpeg
import cv2
import numpy as np

import subprocess

# Path to the file you want to check
here = os.path.dirname(os.path.abspath(__file__))
file_path = os.path.join(here, "output.mp4")
raw_video = cv2.VideoCapture(file_path)
audio_output_path = os.path.join(here, "audio.aac")
final_video_file = os.path.join(here, "output.mp4")

# create a folder to save the frames.
frames_directory = os.path.join(here, "data1")
try:
    if not os.path.exists(frames_directory):
        os.makedirs(frames_directory)
except OSError:
    print("Error: Creating directory of data")

file_path_txt = os.path.join(here, "hiddentext.txt")
# Read the content of the text file to hide
with open(file_path_txt, "r") as f:
    file_content = f.read()
# txt_binary_representation = "".join(format(byte, "08b") for byte in file_content)
# print(file_content)

"""
use this cmd to convert any video to h264 lossless. original vid in 10 bit depth format
ffmpeg -i video.mp4 -c:v libx264 -preset veryslow -qp 0 output.mp4

use this cmd to convert any video to h264 lossless. original vid in 8 bit depth format
ffmpeg -i video.mp4 -c:v libx264 -preset veryslow -crf 0 output.mp4

i used this command to only get first 12 sec of video because the h264 vid is too large 
ffmpeg -i video.mp4 -t 12 -c:v libx264 -preset veryslow -qp 0 output.mp4

check multiple values to ensure it's h264 lossless:
1. CRF = 0
2. qp = 0
3. High 4:4:4 Predictive
"""


# region --codec checking. ensure video is h264 lossless--
def check_h264_lossless(file_path):
    try:
        # Use ffprobe to get detailed codec information, including tags
        result = subprocess.run(
            [
                "ffprobe",
                "-v",
                "error",
                "-show_entries",
                "stream=codec_name,codec_long_name,profile,level,bit_rate,avg_frame_rate,nb_frames,tags",
                "-of",
                "json",
                file_path,
            ],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True,
        )
        # Parse ffprobe's JSON output
        metadata = json.loads(result.stdout)
        print(json.dumps(metadata, indent=4))

        # Check if the CRF value is available in the tags
        for stream in metadata.get("streams", []):
            if stream.get("codec_name") == "h264":
                tags = stream.get("tags", {})
                crf_value = tags.get("crf")
                encoder = tags.get("encoder")
                print(f"CRF value: {crf_value}")
                print(f"Encoder: {encoder}")
        return metadata
    except Exception as e:
        return f"An error occurred: {e}"


# endregion


# region --splitting video into frames--
def extract_audio(input_video_path, audio_output_path):
    if os.path.exists(audio_output_path):
        print(f"Audio file {audio_output_path} already exists. Skipping extraction.")
        return
    command = [
        "ffmpeg",
        "-i",
        input_video_path,
        "-q:a",
        "0",
        "-map",
        "a",
        audio_output_path,
    ]
    try:
        subprocess.run(command, check=True)
        print(f"Audio successfully extracted to {audio_output_path}")
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")


def split_into_frames():
    extract_audio(file_path, audio_output_path)
    currentframe = 0
    print("Splitting...")
    while True:
        ret, frame = raw_video.read()
        if ret:
            name = os.path.join(here, "data1", f"frame{currentframe}.png")
            # print("Creating..." + name)
            cv2.imwrite(name, frame)
            currentframe += 1
        else:
            print("Complete")
            break


# endregion


# region --merge all back into h264 lossless--
# output_video_file = "output1111.mp4"


def stitch_frames_to_video(frames_dir, output_video_path, framerate=60):
    command = [
        "ffmpeg",
        "-y",
        "-framerate",
        str(framerate),
        "-i",
        os.path.join(frames_dir, "frame%d.png"),
        "-c:v",
        "libx264",
        "-preset",
        "veryslow",
        "-qp",
        "0",
        output_video_path,
    ]

    try:
        subprocess.run(command, check=True)
        print(f"Video successfully created at {output_video_path}")
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")


def add_audio_to_video(video_path, audio_path, final_output_path):
    command = [
        "ffmpeg",
        "-i",
        video_path,
        "-i",
        audio_path,
        "-c:v",
        "copy",
        "-c:a",
        "aac",
        "-strict",
        "experimental",
        final_output_path,
    ]
    try:
        subprocess.run(command, check=True)
        print(f"Final video with audio created at {final_output_path}")
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")


# endregion


def to_bin(data):
    # Convert str / bytes / ndarray / int data into 8-bit binary string form
    if isinstance(data, str):
        return "".join([format(ord(i), "08b") for i in data])
    elif isinstance(data, bytes) or isinstance(data, np.ndarray):
        return [format(i, "08b") for i in data]
    elif isinstance(data, int) or isinstance(data, np.uint8):
        return format(data, "08b")
    else:
        raise TypeError("Type not supported")


def encode(image_name, secret_data):
    # Hide secret_data in the least significant bit of each colour channel of the image
    image = cv2.imread(image_name)
    n_bytes = image.shape[0] * image.shape[1] * 3 // 8
    print("[*] Maximum bytes to encode:", n_bytes)
    secret_data += "====="  # stop marker used by decode()
    if len(secret_data) > n_bytes:
        raise ValueError("[!] Insufficient bytes, need bigger image or less data")
    print("[*] Encoding Data")

    data_index = 0
    binary_secret_data = to_bin(secret_data)
    data_len = len(binary_secret_data)
    for row in image:
        for pixel in row:
            r, g, b = to_bin(pixel)
            if data_index < data_len:
                pixel[0] = int(r[:-1] + binary_secret_data[data_index], 2)
                data_index += 1
            if data_index < data_len:
                pixel[1] = int(g[:-1] + binary_secret_data[data_index], 2)
                data_index += 1
            if data_index < data_len:
                pixel[2] = int(b[:-1] + binary_secret_data[data_index], 2)
                data_index += 1
            if data_index >= data_len:
                break
    return image


def decode(image_name):
    # Rebuild the hidden text from the least significant bits until the stop marker
    print("[+] Decoding")
    image = cv2.imread(image_name)
    binary_data = ""
    for row in image:
        for pixel in row:
            r, g, b = to_bin(pixel)
            binary_data += r[-1]
            binary_data += g[-1]
            binary_data += b[-1]
    all_bytes = [binary_data[i : i + 8] for i in range(0, len(binary_data), 8)]
    decoded_data = ""
    for byte in all_bytes:
        decoded_data += chr(int(byte, 2))
        if decoded_data[-5:] == "=====":
            break
    return decoded_data[:-5]


frame0_path = os.path.join(here, "data1", "frame0.png")
encoded_image_path = os.path.join(here, "data1", "frame0.png")


def encoding_function():
    split_into_frames()

    encoded_image = encode(frame0_path, file_content)
    cv2.imwrite(encoded_image_path, encoded_image)

    stitch_frames_to_video(frames_directory, file_path)
    add_audio_to_video(file_path, audio_output_path, final_video_file)


def decoding_function():
    split_into_frames()
    decoded_message = decode(encoded_image_path)
    print(f"[+] Decoded message: {decoded_message}")


# encoding_function()
decoding_function()




So I tried to put my decoding function into my encoding function like this:


def encoding_function():
    split_into_frames()

    encoded_image = encode(frame0_path, file_content)
    cv2.imwrite(encoded_image_path, encoded_image)

    # immediately get frame0 and decode without stitching to check if the data is there
    decoded_message = decode(encoded_image_path)
    print(f"[+] Decoded message: {decoded_message}")

    stitch_frames_to_video(frames_directory, file_path)
    add_audio_to_video(file_path, audio_output_path, final_video_file)




This returns my secret text from frame0, but splitting the video again after stitching does not: the hidden text is lost.


def decoding_function():
    split_into_frames()
    # this function runs after encoding_function(). the secret text is lost,
    # resulting in the "charmap codec can't encode" error
    decoded_message = decode(encoded_image_path)
    print(f"[+] Decoded message: {decoded_message}")



EDIT:
So I ran the encoding function first, copied frame0.png out and placed it somewhere else. Then I ran the decoding function and got another frame0.png.


I ran both frame0.png files through this Python snippet:


frame0_data1_path = os.path.join(here, "data1", "frame0.png")
frame0_data2_path = os.path.join(here, "data2", "frame0.png")
frame0_data1 = cv2.imread(frame0_data1_path)
frame0_data2 = cv2.imread(frame0_data2_path)

if frame0_data1 is None:
    print(f"Error: Could not load image from {frame0_data1_path}")
elif frame0_data2 is None:
    print(f"Error: Could not load image from {frame0_data2_path}")
else:
    if np.array_equal(frame0_data1, frame0_data2):
        print("The frames are identical.")
    else:
        print("The frames are different.")



...and apparently both are different. This means my frame0 pixels got changed when I stitched the frames back into the video after encoding. Is there a way to make them not change? Or will H264 (or any video codec) change the frames a little when you stitch them back up?
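
To get a feel for how different the two frames are, I can extend the comparison above (reusing frame0_data1 and frame0_data2) with something like this; if the re-encode is lossy I'd expect small differences spread over the whole frame rather than just in the least significant bits of a few pixels:

diff = cv2.absdiff(frame0_data1, frame0_data2)  # per-channel absolute difference
print("changed channel values:", np.count_nonzero(diff), "of", diff.size)
print("max per-channel difference:", int(diff.max()))
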

