
Other articles (46)
-
Customizing by adding your logo, banner, or background image
5 September 2013 — Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Publishing on MediaSPIP
13 June 2013 — Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out. -
Final creation of the channel
12 March 2010 — Once your request is approved, you can proceed with the actual creation of the channel. Each channel is a fully-fledged site placed under your responsibility. The platform administrators have no access to it.
Upon approval, you receive an email inviting you to create your channel.
To do so, simply go to its address, in our example "http://votre_sous_domaine.mediaspip.net".
At that point a password is requested; you simply have to (...)
On other sites (6348)
-
Video fingerprinting not working for the video
2 September 2024, by Veer Pratap — I implemented video fingerprinting, and it works well when the videos are exactly the same, producing a similarity score of 1. However, I run into an issue when comparing videos whose segments have been merged in different orders. For instance, if I merge video1 and video2 into a new video and then merge them in reverse order (video2 + video1), the system fails to detect any matching frames between the two resulting videos.


The challenge lies in comparing frames between videos regardless of their timing or order. How can I modify the comparison process so that frames are matched correctly even when the order of video segments changes?


use ffmpeg_next::{codec, format, frame, media, Error};
use sha2::{Digest, Sha256};
use std::collections::HashSet;

pub fn extract_frames(video_path: &str) -> Result<Vec<Vec<u8>>, Error> {
    ffmpeg_next::init()?;
    let mut ictx = format::input(&video_path)?;
    let input_stream_index = ictx
        .streams()
        .best(media::Type::Video)
        .ok_or(Error::StreamNotFound)?
        .index();
    let codec_params = ictx
        .stream(input_stream_index)
        .ok_or(Error::StreamNotFound)?
        .parameters();
    let mut decoder = codec::Context::from_parameters(codec_params)?
        .decoder()
        .video()?;
    let mut frame = frame::Video::empty();
    let mut frames = Vec::new();
    let mut packet_count = 0;
    let mut frame_count = 0;

    for (stream, packet) in ictx.packets() {
        packet_count += 1;
        if stream.index() == input_stream_index {
            decoder.send_packet(&packet)?;
            while decoder.receive_frame(&mut frame).is_ok() {
                frames.push(frame.data(0).to_vec());
                frame_count += 1;
                eprintln!("Extracted frame {}", frame_count);
            }
        }
    }
    // Flush the decoder; without send_eof() the last buffered frames are lost.
    decoder.send_eof()?;
    while decoder.receive_frame(&mut frame).is_ok() {
        frames.push(frame.data(0).to_vec());
        frame_count += 1;
    }
    eprintln!(
        "Processed {} packets and extracted {} frames",
        packet_count, frame_count
    );
    Ok(frames)
}

pub fn hash_frame(frame: &[u8]) -> Vec<u8> {
    let mut hasher = Sha256::new();
    hasher.update(frame);
    hasher.finalize().to_vec()
}

/// Generates a vector of fingerprints for the given video frames.
///
/// This function takes a vector of frames (each represented as a vector of bytes)
/// and generates a fingerprint for each frame using the SHA-256 hash function.
///
/// # Arguments
///
/// * `frames` - A vector of video frames, where each frame is a `Vec<u8>` representing the frame's raw data.
///
/// # Returns
///
/// * `Vec<Vec<u8>>` - A vector of fingerprints, where each fingerprint is a `Vec<u8>` representing the SHA-256 hash of the corresponding frame.
pub fn generate_fingerprints(frames: Vec<Vec<u8>>) -> Vec<Vec<u8>> {
    frames.into_iter().map(|frame| hash_frame(&frame)).collect()
}

/// Compares two videos by extracting frames and generating fingerprints, then computing the similarity between the two sets of fingerprints.
///
/// This function extracts frames from the two provided video files, generates fingerprints for each frame,
/// and compares the fingerprints to determine the similarity between the two videos.
///
/// # Arguments
///
/// * `video_path1` - A string slice that holds the path to the first video file.
/// * `video_path2` - A string slice that holds the path to the second video file.
///
/// # Returns
///
/// * `Result<f64, Error>` - The similarity score between the two videos as a floating-point value (0.0 to 1.0).
///   Returns an error if there is an issue with extracting frames or generating fingerprints.
///
/// # Errors
///
/// This function will return an error if:
/// * There is an issue with opening or reading the video files.
/// * There is an issue with extracting frames from the video files.
/// * There is an issue with generating fingerprints from the frames.
pub fn compare_videos(
    video_path1: &str,
    video_path2: &str,
) -> Result<f64, Error> {
    println!("Comparing videos: {} and {}", video_path1, video_path2);
    let frames1 = extract_frames(video_path1)?;
    let frames2 = extract_frames(video_path2)?;

    let fingerprints1: HashSet<_> = generate_fingerprints(frames1).into_iter().collect();
    let fingerprints2: HashSet<_> = generate_fingerprints(frames2).into_iter().collect();

    println!("Number of fingerprints in video 1: {}", fingerprints1.len());
    println!("Number of fingerprints in video 2: {}", fingerprints2.len());

    if !fingerprints1.is_empty() && !fingerprints2.is_empty() {
        println!(
            "Sample fingerprint from video 1: {:?}",
            fingerprints1.iter().take(1).collect::<Vec<_>>()
        );
        println!(
            "Sample fingerprint from video 2: {:?}",
            fingerprints2.iter().take(1).collect::<Vec<_>>()
        );
    }

    // Calculate Jaccard similarity
    let intersection_size = fingerprints1.intersection(&fingerprints2).count();
    let union_size = fingerprints1.union(&fingerprints2).count();

    println!("Intersection size: {}", intersection_size);
    println!("Union size: {}", union_size);

    let similarity = if union_size == 0 {
        0.0
    } else {
        intersection_size as f64 / union_size as f64
    };

    println!("Similarity score: {}", similarity);

    Ok(similarity)
}
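A side note on the Jaccard score computed above: sets are order-independent by construction, so reordering segments alone cannot explain a zero similarity; the more likely culprit is that merging re-encodes the video, so no frame's bytes (and hence no SHA-256 hash) survive exactly. As a concrete check of the arithmetic, with fingerprint sets {1, 2, 3} and {2, 3, 4} the intersection has 2 elements and the union 4, giving 2/4 = 0.5. A minimal standalone sketch (illustrative hash values, not real fingerprints):

```rust
use std::collections::HashSet;

// Jaccard similarity: |A ∩ B| / |A ∪ B|, with 0.0 for two empty sets.
fn jaccard(a: &HashSet<u64>, b: &HashSet<u64>) -> f64 {
    let inter = a.intersection(b).count();
    let union = a.union(b).count();
    if union == 0 { 0.0 } else { inter as f64 / union as f64 }
}

fn main() {
    // The same hashes in any order give the same set, hence the same score.
    let v1: HashSet<u64> = [1, 2, 3].into_iter().collect();
    let v2: HashSet<u64> = [3, 2, 4].into_iter().collect();
    assert_eq!(jaccard(&v1, &v1), 1.0);
    assert_eq!(jaccard(&v1, &v2), 0.5); // |{2,3}| / |{1,2,3,4}|
    println!("similarity: {}", jaccard(&v1, &v2));
}
```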

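Because exact hashes like SHA-256 only match byte-identical frames, a common fix is a perceptual hash compared by Hamming distance: visually similar frames then land a few bits apart instead of in different buckets entirely. The sketch below is a minimal average hash ("aHash") over an 8x8 luma patch; the 8x8 downscaling step, the function names, and the distance thresholds are assumptions for illustration, not part of the original code:

```rust
/// Average hash ("aHash") of an 8x8 grayscale patch (64 bytes):
/// each bit is set if that pixel is brighter than the patch mean.
fn average_hash(patch: &[u8; 64]) -> u64 {
    let mean: u32 = patch.iter().map(|&p| p as u32).sum::<u32>() / 64;
    patch.iter().enumerate().fold(0u64, |hash, (i, &p)| {
        if p as u32 > mean { hash | (1u64 << i) } else { hash }
    })
}

/// Number of differing bits; small distances mean visually similar patches.
fn hamming(a: u64, b: u64) -> u32 {
    (a ^ b).count_ones()
}

fn main() {
    // A left-to-right luminance gradient standing in for a downscaled frame.
    let mut grad = [0u8; 64];
    for i in 0..64 { grad[i] = (i as u8) * 4; }

    // The same gradient with one pixel nudged, as mild re-encoding noise.
    let mut noisy = grad;
    noisy[10] = noisy[10].saturating_add(8);

    // The mirrored gradient: a visually different picture.
    let mut rev = [0u8; 64];
    for i in 0..64 { rev[i] = (63 - i as u8) * 4; }

    // Near-duplicates stay within a few bits; different content lands far away.
    assert!(hamming(average_hash(&grad), average_hash(&noisy)) <= 4);
    assert!(hamming(average_hash(&grad), average_hash(&rev)) >= 32);
}
```

Matching then becomes "distance below a threshold" rather than set membership, so the Jaccard step would need a nearest-neighbour comparison (or bucketing of hashes) instead of a plain `HashSet` intersection.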


-
Notes on Linux for Dreamcast
23 February 2011, by Multimedia Mike — Sega Dreamcast, VP8
I wanted to write down some notes about compiling Linux on Dreamcast (which I have yet to follow through to success). But before I do, allow me to follow up on my last post, where I got Google's libvpx library decoding VP8 video on the DC. Remember when I said the graphics hardware could only process variations of RGB color formats? I was mistaken. Reading over some old documentation, I noticed that the DC's PowerVR hardware can also handle packed YUV textures (UYVY, specifically):
The video looks pretty sharp in the small photo. Up close, less so, due to the low resolution and high quantization of the test vector combined with the naive chroma upscaling. For the curious, the grey box surrounding the image highlights the 256-square texture that the video frame gets plotted on. Texture dimensions have to be powers of 2.
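For reference, UYVY is a packed 4:2:2 layout: each pair of horizontally adjacent pixels shares one U and one V sample, stored in the byte order U0 Y0 V0 Y1. A tiny sketch of packing planar 4:2:2 rows into that order (the function name and layout details are my illustration, not from the post):

```rust
/// Pack one row of 4:2:2 planar YUV (two Y samples per U/V sample)
/// into UYVY byte order: U0 Y0 V0 Y1 | U1 Y2 V1 Y3 | ...
fn pack_uyvy(y: &[u8], u: &[u8], v: &[u8]) -> Vec<u8> {
    assert_eq!(y.len(), 2 * u.len());
    assert_eq!(u.len(), v.len());
    let mut out = Vec::with_capacity(2 * y.len());
    for i in 0..u.len() {
        out.extend_from_slice(&[u[i], y[2 * i], v[i], y[2 * i + 1]]);
    }
    out
}

fn main() {
    // Four luma samples, two shared chroma pairs -> eight packed bytes.
    let packed = pack_uyvy(&[10, 20, 30, 40], &[1, 2], &[3, 4]);
    assert_eq!(packed, vec![1, 10, 3, 20, 2, 30, 4, 40]);
}
```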
Notes on Linux for Dreamcast
I've occasionally dabbled with Linux on my Dreamcast. There's an ancient (circa 2001) distro based around a build of kernel 2.4.5 out there. But I wanted to try to get something more current compiled. Thus far, I have figured out how to cross-compile kernels pretty handily but have been unsuccessful in making them run.
Here are notes on the compilation portion:
- kernel.org provides a very useful set of cross compiling toolchains
- get the gcc 4.5.1 cross toolchain for SH-4 (the gcc 4.3.3 one won't work because its binutils is too old; it will fail to assemble certain instructions, as described in this post)
- working off of Linux kernel 2.6.37, edit the top-level Makefile; find the ARCH and CROSS_COMPILE variables and set them appropriately:
ARCH ?= sh
CROSS_COMPILE ?= /path/to/gcc-4.5.1-nolibc/sh4-linux/bin/sh4-linux-
$ make dreamcast_defconfig
$ make menuconfig
... if any changes to the default configuration are desired
- manually edit arch/sh/Makefile, changing:
cflags-$(CONFIG_CPU_SH4) := $(call cc-option,-m4,) \
                            $(call cc-option,-mno-implicit-fp,-m4-nofpu)
to :
cflags-$(CONFIG_CPU_SH4) := $(call cc-option,-m4,) \
                            $(call cc-option,-mno-implicit-fp)
I.e., remove the '-m4-nofpu' option. According to the gcc man page, this will "Generate code for the SH4 without a floating-point unit." Why this is the default is a mystery, since the DC's SH-4 has an FPU and compilation fails when this option is enabled.
- On that note, I was always under the impression that the DC sported an SH-4 CPU with the model number SH7750. According to this LinuxSH wiki page, as well as the Linux kernel help, it actually has an SH7091 variant. This photo of the physical DC hardware corroborates the model number.
$ make
... to build a Linux kernel for the Sega Dreamcast
Running
So I can compile the kernel, but running the kernel (the resulting vmlinux ELF file) gives me trouble. The default kernel ELF file reports an entry point of 0x8c002000. Attempting to upload this through the serial uploading facility I have available triggers a system reset almost immediately, probably because that's the same place the bootloader calls home. I have attempted to alter the starting address via 'make menuconfig' -> System type -> Memory management options -> Physical memory start address. This allows the upload to complete, but it still does not run. It's worth noting that the 2.4.5 vmlinux file from the old distribution can be executed when uploaded through the serial loader, and it begins at 0x8c210000.
-
FFmpeg on android is crashing in avcodec_decode_video2 function
6 June 2015, by Matt Wolfe
FFmpeg is crashing in libavcodec/utils.c, in avcodec_decode_video2, around line 2400:
ret = avctx->codec->decode(avctx, picture, got_picture_ptr, &tmp);
So I've compiled ffmpeg on Android using the following configure script (based on the one here):
prefix=${src_root}/ffmpeg/android/arm
addi_cflags="-marm -Os -fpic"
addi_ldflags=""
./configure \
--prefix=${prefix} \
--target-os=linux \
--arch=arm \
--enable-shared \
--disable-doc \
--disable-programs \
--disable-symver \
--cross-prefix=${TOOLCHAIN}/bin/arm-linux-androideabi- \
--enable-cross-compile \
--enable-decoder=aac \
--enable-decoder=mpeg4 \
--enable-decoder=h263 \
--enable-decoder=flv \
--enable-decoder=mpegvideo \
--enable-decoder=mpeg2video \
--sysroot=${SYSROOT} \
--extra-cflags="${addi_cflags}" \
--pkg-config=$(which pkg-config) >> ${build_log} 2>&1 || die "Couldn't configure ffmpeg"

The *.so files get copied over into my project, which I reference from my Android.mk script:
LOCAL_PATH := $(call my-dir)
FFMPEG_PATH=/path/to/android-ffmpeg-with-rtmp/build/dist
include $(CLEAR_VARS)
LOCAL_MODULE := libavcodec
LOCAL_SRC_FILES :=$(FFMPEG_PATH)/lib/libavcodec-56.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := libavdevice
LOCAL_SRC_FILES :=$(FFMPEG_PATH)/lib/libavdevice-56.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := libavfilter
LOCAL_SRC_FILES :=$(FFMPEG_PATH)/lib/libavfilter-5.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := libavformat
LOCAL_SRC_FILES :=$(FFMPEG_PATH)/lib/libavformat-56.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := libavutil
LOCAL_SRC_FILES :=$(FFMPEG_PATH)/lib/libavutil-54.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := libswresample
LOCAL_SRC_FILES :=$(FFMPEG_PATH)/lib/libswresample-1.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := libswscale
LOCAL_SRC_FILES :=$(FFMPEG_PATH)/lib/libswscale-3.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_LDLIBS := -llog
LOCAL_C_INCLUDES := $(FFMPEG_PATH)/include
#LOCAL_PRELINK_MODULE := false
LOCAL_MODULE := axonffmpeg
LOCAL_SRC_FILES := libffmpeg.c
LOCAL_CFLAGS := -g
LOCAL_SHARED_LIBRARIES := libavcodec libavdevice libavfilter libavformat libavutil libswresample libswscale
include $(BUILD_SHARED_LIBRARY)

I'm building a little wrapper to decode frames (MPEG-4 part 2, simple profile) that come from an external camera:
#include
#include
#include <android/log.h>
#include <libavutil/opt.h>
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
#include <libavutil/common.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavutil/samplefmt.h>
#define DEBUG_TAG "LibFFMpeg:NDK"
AVCodec *codec;
AVFrame *current_frame;
AVCodecContext *context;
int resWidth, resHeight, bitRate;
void my_log_callback(void *ptr, int level, const char *fmt, va_list vargs);
jint Java_com_mycompany_axonv2_LibFFMpeg_initDecoder(JNIEnv * env, jobject this,
jint _resWidth, jint _resHeight, jint _bitRate)
{
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "initDecoder called");
int len;
resWidth = _resWidth;
resHeight = _resHeight;
bitRate = _bitRate;
av_log_set_callback(my_log_callback);
av_log_set_level(AV_LOG_VERBOSE);
avcodec_register_all();
codec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
if (!codec) {
__android_log_print(ANDROID_LOG_ERROR, DEBUG_TAG, "codec %d not found", AV_CODEC_ID_MPEG4);
return -1;
}
context = avcodec_alloc_context3(codec);
if (!context) {
__android_log_print(ANDROID_LOG_ERROR, DEBUG_TAG, "Could not allocate codec context");
return -1;
}
context->width = resWidth;
context->height = resHeight;
context->bit_rate = bitRate;
context->pix_fmt = AV_PIX_FMT_YUV420P;
context->time_base.den = 6;
context->time_base.num = 1;
int openRet = avcodec_open2(context, codec, NULL);
if (openRet < 0) {
__android_log_print(ANDROID_LOG_ERROR, DEBUG_TAG, "Could not open codec, error:%d", openRet);
return -1;
}
current_frame = av_frame_alloc();
if (!current_frame) {
__android_log_print(ANDROID_LOG_ERROR, DEBUG_TAG, "Could not allocate video frame");
return -1;
}
return 0;
}
void my_log_callback(void *ptr, int level, const char *fmt, va_list vargs) {
 /* Use the va_list variant: __android_log_print expects variadic args,
    and ffmpeg's log levels are not valid Android log priorities. */
 __android_log_vprint(ANDROID_LOG_DEBUG, DEBUG_TAG, fmt, vargs);
}
jint Java_com_mycompany_axonv2_LibFFMpeg_queueFrameForDecoding(JNIEnv * env, jobject this,
jlong pts, jbyteArray jBuffer)
{
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "queueFrameForDecoding called");
AVPacket avpkt;
av_init_packet(&avpkt);
int buffer_len = (*env)->GetArrayLength(env, jBuffer);
uint8_t* buffer = (uint8_t *) (*env)->GetByteArrayElements(env, jBuffer,0);
int got_frame = 0;
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "copied %d bytes into uint8_t* buffer", buffer_len);
av_packet_from_data(&avpkt, buffer, buffer_len);
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "av_packet_from_data called");
avpkt.pts = pts;
int ret = avcodec_decode_video2(context, current_frame, &got_frame, &avpkt);
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "avcodec_decode_video2 returned %d" , ret);
(*env)->ReleaseByteArrayElements(env, jBuffer, (jbyte*) buffer, 0);
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "ReleaseByteArrayElements()");
return 0;
}

Alright, so the init function above works fine, and queueFrameForDecoding works up until the avcodec_decode_video2 function. I'm not expecting it to fully work just yet; however, I've been logging output to see where we get in that function, and I've found that there is a call (in libavcodec/utils.c, around line 2400 in the latest code):

avcodec_decode_video2(...) {
 ....
 ret = avctx->codec->decode(avctx, picture, got_picture_ptr, &tmp);

init runs fine and finds the codec and all that. Everything works great up until the avcodec_decode_video2 call:
*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
Build fingerprint: 'samsung/klteuc/klteatt:4.4.2/KOT49H/G900AUCU2ANG3:user/release-keys'
Revision: '14'
pid: 19355, tid: 22584, name: BluetoothReadTh >>> com.mycompany.axonv2 <<<
signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 00000000
r0 79308400 r1 79491710 r2 7b0b4a70 r3 7b0b49e8
r4 79308400 r5 79491710 r6 00000000 r7 7b0b49e8
r8 7b0b4a70 r9 7b0b4a80 sl 795106d8 fp 00000000
ip 00000000 sp 7b0b49b8 lr 7ba05c18 pc 00000000 cpsr 600f0010
d0 206c616768616c62 d1 6564206365646f63
d2 756f722065646f63 d3 20736920656e6974
d4 0b0a01000a0a0a0b d5 0a630a01000a0a0a
d6 0a630a011a00f80a d7 0b130a011a00f90a
d8 0000000000000000 d9 0000000000000000
d10 0000000000000000 d11 0000000000000000
d12 0000000000000000 d13 0000000000000000
d14 0000000000000000 d15 0000000000000000
d16 6369705f746f6720 d17 7274705f65727574
d18 8000000000000000 d19 00000b9e42bd5730
d20 0000000000000000 d21 0000000000000000
d22 7b4fd10400000000 d23 773b894877483b68
d24 0000000000000000 d25 3fc2f112df3e5244
d26 40026bb1bbb55516 d27 0000000000000000
d28 0000000000000000 d29 0000000000000000
d30 0000000000000000 d31 0000000000000000
scr 60000010
backtrace:
#00 pc 00000000 <unknown>
#01 pc 00635c14 /data/app-lib/com.mycompany.axonv2-6/libavcodec-56.so (avcodec_decode_video2+1128)
I don't understand why it's crashing when trying to call the decode function. I've looked into the codec function-pointer list, and this should be calling ff_h263_decode_frame (source: libavcodec/mpeg4videodec.c):
AVCodec ff_mpeg4_decoder = {
.name = "mpeg4",
.long_name = NULL_IF_CONFIG_SMALL("MPEG-4 part 2"),
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_MPEG4,
.priv_data_size = sizeof(Mpeg4DecContext),
.init = decode_init,
.close = ff_h263_decode_end,
.decode = ff_h263_decode_frame,
.capabilities = CODEC_CAP_DRAW_HORIZ_BAND | CODEC_CAP_DR1 |
CODEC_CAP_TRUNCATED | CODEC_CAP_DELAY |
CODEC_CAP_FRAME_THREADS,
.flush = ff_mpeg_flush,
.max_lowres = 3,
.pix_fmts = ff_h263_hwaccel_pixfmt_list_420,
.profiles = NULL_IF_CONFIG_SMALL(mpeg4_video_profiles),
.update_thread_context = ONLY_IF_THREADS_ENABLED(mpeg4_update_thread_context),
.priv_class = &mpeg4_class,
};I know that the ff_h263_decode_frame function isn’t being called because I added logging to it and none of that gets printed.
However, if I just call ff_h263_decode_frame directly from avcodec_decode_video2, then my logging gets output. I don't want to call this function directly, though, and would rather get the ffmpeg framework working correctly. Is there something wrong with how I've configured ffmpeg? I have added mpegvideo, mpeg2video, flv, and h263 to the configure script, but none of them has helped (they should be included automatically by --enable-decoder=mpeg4).

Any help would be greatly appreciated.