
Other articles (79)
-
Creating farms of unique websites
13 April 2011 — MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...) -
User profiles
12 April 2011 — Each user has a profile page on which they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...) -
Configuring language support
15 November 2010 — Accessing the configuration and adding supported languages
To configure support for new languages, go to the "Administrer" section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language. Once one has, it becomes greyed out in the configuration and (...)
On other sites (7621)
-
I have a Flask application that streams a camera using ffmpeg; the problem is that I cannot display the video from the camera when using the GPU [closed]
17 January, by Ruben — To put you in context: I am using Flask (Python) to display a camera stream in the browser. For this I use the following Python code:


command = [
    'ffmpeg',
    '-loglevel', 'warning',
    '-rtsp_transport', 'tcp',
    '-i', self.config['url'],
    '-map', '0:v:0',  # force processing of the video stream only
    '-vf', f'fps={self.config["fps"]},scale=640:360:force_original_aspect_ratio=decrease',
    '-c:v', 'h264_nvenc',  # use the NVIDIA GPU encoder
    '-preset', 'p7',  # quality/speed trade-off (p1 fastest but lowest quality - p7 slowest but best quality)
    '-qp', self.config['quality'],  # encoder quality control (0 [best quality] - 51 [worst quality])
    '-pix_fmt', 'yuv444p',  # set the pixel format explicitly
    '-color_range', 'pc',
    '-an',  # disable audio
    '-f', 'image2pipe',
    'pipe:1'
]

self.process = subprocess.Popen(
    command,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    bufsize=10**8
)



The problem is that the video stream is not displayed, even though it connects to the camera correctly.
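For what it's worth (this is a sketch of the usual browser-streaming pattern, not something taken from the question): a browser cannot render a raw H.264 elementary stream delivered over a plain HTTP response, so `image2pipe` output is normally paired with `-c:v mjpeg` and served as `multipart/x-mixed-replace`, splitting the pipe's byte stream on JPEG markers. A minimal, naive splitter (it can misfire if `FF D9` happens to occur inside entropy-coded data):

```python
def split_jpeg_frames(buffer: bytes):
    """Split a concatenated JPEG byte stream into complete frames.

    JPEG images start with the SOI marker (FF D8) and end with EOI (FF D9);
    returns (complete_frames, leftover_tail) so the tail can be prepended
    to the next chunk read from the ffmpeg pipe.
    """
    frames = []
    while True:
        start = buffer.find(b'\xff\xd8')
        end = buffer.find(b'\xff\xd9', start + 2)
        if start == -1 or end == -1:
            break
        frames.append(buffer[start:end + 2])
        buffer = buffer[end + 2:]
    return frames, buffer
```

Each returned frame can then be written out as one multipart part (`--frame\r\nContent-Type: image/jpeg\r\n\r\n` + frame) from a Flask generator.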


It also shows me the following warnings, which may have something to do with the display; the second one, about the pixel format, is probably the relevant one:


DEBUG :main:FFmpeg [camera1] : Guessed Channel Layout for Input Stream #0.1 : mono
DEBUG :main:FFmpeg [camera1] : [swscaler @ 0x560f70b78680] deprecated pixel format used, make sure you did set range correctly


The server has several encoders installed:


DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders : h264 h264_v4l2m2m h264_qsv h264_cuvid ) (encoders : libx264 libx264rgb h264_nvenc h264_omx h264_qsv h264_v4l2m2m h264_vaapi nvenc nvenc_h264 )


I use h264_nvenc; the server also has hardware-acceleration support with:


libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
libpostproc 55. 9.100 / 55. 9.100
Hardware acceleration methods :
vdpau
cuda
vaapi
qsv
drm
opencl


Among these, h264_nvenc uses cuda.
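That availability line can also be checked from Python before spawning ffmpeg. Here is a small sketch (the helper name is mine) that pulls the encoder list out of one line of `ffmpeg -codecs`-style output:

```python
import re

def encoders_for_codec(codec_line: str):
    """Extract encoder names from one line of `ffmpeg -codecs` output.

    Codec lines may carry an `(encoders: ...)` group naming the encoders
    compiled into this ffmpeg build; an empty list means none are listed.
    """
    m = re.search(r'\(encoders\s*:\s*([^)]*)\)', codec_line)
    return m.group(1).split() if m else []
```

`'h264_nvenc' in encoders_for_codec(line)` is then a cheap sanity check that the GPU encoder is really present on the machine the code runs on.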


To expand a little on the information it reports for h264_nvenc:


Encoder h264_nvenc [NVIDIA NVENC H.264 encoder]:
 General capabilities: dr1 delay hardware
 Threading capabilities: none
 Supported hardware devices: cuda cuda
 Supported pixel formats: yuv420p nv12 p010le yuv444p p016le yuv444p16le bgr0 rgb0 cuda
h264_nvenc AVOptions:
 -preset <int> E..V....... Set the encoding preset (from 0 to 18) (default p4)
 default 0 E..V.......
 slow 1 E..V....... hq 2 passes
 medium 2 E..V....... hq 1 pass
 fast 3 E..V....... hp 1 pass
 hp 4 E..V.......
 hq 5 E..V.......
 bd 6 E..V.......
 ll 7 E..V....... low latency
 llhq 8 E..V....... low latency hq
 llhp 9 E..V....... low latency hp
 lossless 10 E..V.......
 losslesshp 11 E..V.......
 p1 12 E..V....... fastest (lowest quality)
 p2 13 E..V....... faster (lower quality)
 p3 14 E..V....... fast (low quality)
 p4 15 E..V....... medium (default)
 p5 16 E..V....... slow (good quality)
 p6 17 E..V....... slower (better quality)
 p7 18 E..V....... slowest (best quality)
 -tune <int> E..V....... Set the encoding tuning info (from 1 to 4) (default hq)
 hq 1 E..V....... High quality
 ll 2 E..V....... Low latency
 ull 3 E..V....... Ultra low latency
 lossless 4 E..V....... Lossless
 -profile <int> E..V....... Set the encoding profile (from 0 to 3) (default main)
 baseline 0 E..V.......
 main 1 E..V.......
 high 2 E..V.......
 high444p 3 E..V.......
 -level <int> E..V....... Set the encoding level restriction (from 0 to 62) (default auto)
 auto 0 E..V.......
 1 10 E..V.......
 1.0 10 E..V.......
 1b 9 E..V.......
 1.0b 9 E..V.......
 1.1 11 E..V.......
 1.2 12 E..V.......
 1.3 13 E..V.......
 2 20 E..V.......
 2.0 20 E..V.......
 2.1 21 E..V.......
 2.2 22 E..V.......
 3 30 E..V.......
 3.0 30 E..V.......
 3.1 31 E..V.......
 3.2 32 E..V.......
 4 40 E..V.......
 4.0 40 E..V.......
 4.1 41 E..V.......
 4.2 42 E..V.......
 5 50 E..V.......
 5.0 50 E..V.......
 5.1 51 E..V.......
 5.2 52 E..V.......
 6.0 60 E..V.......
 6.1 61 E..V.......
 6.2 62 E..V.......
 -rc <int> E..V....... Override the preset rate-control (from -1 to INT_MAX) (default -1)
 constqp 0 E..V....... Constant QP mode
 vbr 1 E..V....... Variable bitrate mode
 cbr 2 E..V....... Constant bitrate mode
 vbr_minqp 8388612 E..V....... Variable bitrate mode with MinQP (deprecated)
 ll_2pass_quality 8388616 E..V....... Multi-pass optimized for image quality (deprecated)
 ll_2pass_size 8388624 E..V....... Multi-pass optimized for constant frame size (deprecated)
 vbr_2pass 8388640 E..V....... Multi-pass variable bitrate mode (deprecated)
 cbr_ld_hq 8388616 E..V....... Constant bitrate low delay high quality mode
 cbr_hq 8388624 E..V....... Constant bitrate high quality mode
 vbr_hq 8388640 E..V....... Variable bitrate high quality mode
 -rc-lookahead <int> E..V....... Number of frames to look ahead for rate-control (from 0 to INT_MAX) (default 0)
 -surfaces <int> E..V....... Number of concurrent surfaces (from 0 to 64) (default 0)
 -cbr <boolean> E..V....... Use cbr encoding mode (default false)
 -2pass <boolean> E..V....... Use 2pass encoding mode (default auto)
 -gpu <int> E..V....... Selects which NVENC capable GPU to use. First GPU is 0, second is 1, and so on. (from -2 to INT_MAX) (default any)
 any -1 E..V....... Pick the first device available
 list -2 E..V....... List the available devices
 -delay <int> E..V....... Delay frame output by the given amount of frames (from 0 to INT_MAX) (default INT_MAX)
 -no-scenecut <boolean> E..V....... When lookahead is enabled, set this to 1 to disable adaptive I-frame insertion at scene cuts (default false)
 -forced-idr <boolean> E..V....... If forcing keyframes, force them as IDR frames. (default false)
 -b_adapt <boolean> E..V....... When lookahead is enabled, set this to 0 to disable adaptive B-frame decision (default true)
 -spatial-aq <boolean> E..V....... set to 1 to enable Spatial AQ (default false)
 -spatial_aq <boolean> E..V....... set to 1 to enable Spatial AQ (default false)
 -temporal-aq <boolean> E..V....... set to 1 to enable Temporal AQ (default false)
 -temporal_aq <boolean> E..V....... set to 1 to enable Temporal AQ (default false)
 -zerolatency <boolean> E..V....... Set 1 to indicate zero latency operation (no reordering delay) (default false)
 -nonref_p <boolean> E..V....... Set this to 1 to enable automatic insertion of non-reference P-frames (default false)
 -strict_gop <boolean> E..V....... Set 1 to minimize GOP-to-GOP rate fluctuations (default false)
 -aq-strength <int> E..V....... When Spatial AQ is enabled, this field is used to specify AQ strength. AQ strength scale is from 1 (low) - 15 (aggressive) (from 1 to 15) (default 8)
 -cq <float> E..V....... Set target quality level (0 to 51, 0 means automatic) for constant quality mode in VBR rate control (from 0 to 51) (default 0)
 -aud <boolean> E..V....... Use access unit delimiters (default false)
 -bluray-compat <boolean> E..V....... Bluray compatibility workarounds (default false)
 -init_qpP <int> E..V....... Initial QP value for P frame (from -1 to 51) (default -1)
 -init_qpB <int> E..V....... Initial QP value for B frame (from -1 to 51) (default -1)
 -init_qpI <int> E..V....... Initial QP value for I frame (from -1 to 51) (default -1)
 -qp <int> E..V....... Constant quantization parameter rate control method (from -1 to 51) (default -1)
 -weighted_pred <int> E..V....... Set 1 to enable weighted prediction (from 0 to 1) (default 0)
 -coder <int> E..V....... Coder type (from -1 to 2) (default default)
 default -1 E..V.......
 auto 0 E..V.......
 cabac 1 E..V.......
 cavlc 2 E..V.......
 ac 1 E..V.......
 vlc 2 E..V.......
 -b_ref_mode <int> E..V....... Use B frames as references (from 0 to 2) (default disabled)
 disabled 0 E..V....... B frames will not be used for reference
 each 1 E..V....... Each B frame will be used for reference
 middle 2 E..V....... Only (number of B frames)/2 will be used for reference
 -a53cc <boolean> E..V....... Use A53 Closed Captions (if available) (default true)
 -dpb_size <int> E..V....... Specifies the DPB size used for encoding (0 means automatic) (from 0 to INT_MAX) (default 0)
 -multipass <int> E..V....... Set the multipass encoding (from 0 to 2) (default disabled)
 disabled 0 E..V....... Single Pass
 qres 1 E..V....... Two Pass encoding is enabled where first Pass is quarter resolution
 fullres 2 E..V....... Two Pass encoding is enabled where first Pass is full resolution
 -ldkfs <int> E..V....... Low delay key frame scale; Specifies the Scene Change frame size increase allowed in case of single frame VBV and CBR (from 0 to 255) (default 0)


If anyone has any ideas, or needs more information to help me, I would appreciate it.


-
Cannot add tmcd stream using libavcodec to replicate behavior of ffmpeg -timecode option
2 August, by Sailor Jerry — I'm trying to replicate the ffmpeg command-line option -timecode in my C/C++ code. For some reason the tmcd stream is not written to the output file, even though av_dump_format shows it at run time.


Here is my minimal test


#include <iostream>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libswscale/swscale.h>
#include <libavutil/opt.h>
#include <libavutil/imgutils.h>
#include <libavutil/samplefmt.h>
}
bool checkProResAvailability() {
 const AVCodec* codec = avcodec_find_encoder_by_name("prores_ks");
 if (!codec) {
 std::cerr << "ProRes codec not available. Please install FFmpeg with ProRes support." << std::endl;
 return false;
 }
 return true;
}

int main(){
 av_log_set_level(AV_LOG_INFO);

 const char* outputFileName = "test_tmcd.mov";
 AVFormatContext* formatContext = nullptr;
 AVCodecContext* videoCodecContext = nullptr;

 if (!checkProResAvailability()) {
 return -1;
 }

 std::cout << "Creating test file with tmcd stream: " << outputFileName << std::endl;

 // Allocate the output format context
 if (avformat_alloc_output_context2(&formatContext, nullptr, "mov", outputFileName) < 0) {
 std::cerr << "Failed to allocate output context!" << std::endl;
 return -1;
 }

 if (avio_open(&formatContext->pb, outputFileName, AVIO_FLAG_WRITE) < 0) {
 std::cerr << "Failed to open output file!" << std::endl;
 avformat_free_context(formatContext);
 return -1;
 }

 // Find ProRes encoder
 const AVCodec* videoCodec = avcodec_find_encoder_by_name("prores_ks");
 if (!videoCodec) {
 std::cerr << "Failed to find the ProRes encoder!" << std::endl;
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 // Video stream setup
 AVStream* videoStream = avformat_new_stream(formatContext, nullptr);
 if (!videoStream) {
 std::cerr << "Failed to create video stream!" << std::endl;
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 videoCodecContext = avcodec_alloc_context3(videoCodec);
 if (!videoCodecContext) {
 std::cerr << "Failed to allocate video codec context!" << std::endl;
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 videoCodecContext->width = 1920;
 videoCodecContext->height = 1080;
 videoCodecContext->pix_fmt = AV_PIX_FMT_YUV422P10;
 videoCodecContext->time_base = (AVRational){1, 30}; // Set FPS: 30
 videoCodecContext->bit_rate = 2000000;

 if (avcodec_open2(videoCodecContext, videoCodec, nullptr) < 0) {
 std::cerr << "Failed to open ProRes codec!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 if (avcodec_parameters_from_context(videoStream->codecpar, videoCodecContext) < 0) {
 std::cerr << "Failed to copy codec parameters to video stream!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 videoStream->time_base = videoCodecContext->time_base;

 // Timecode stream setup
 AVStream* timecodeStream = avformat_new_stream(formatContext, nullptr);
 if (!timecodeStream) {
 std::cerr << "Failed to create timecode stream!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 timecodeStream->codecpar->codec_type = AVMEDIA_TYPE_DATA;
 timecodeStream->codecpar->codec_id = AV_CODEC_ID_TIMED_ID3;
 timecodeStream->codecpar->codec_tag = MKTAG('t', 'm', 'c', 'd'); // Timecode tag
 timecodeStream->time_base = (AVRational){1, 30}; // FPS: 30

 if (av_dict_set(&timecodeStream->metadata, "timecode", "00:00:30:00", 0) < 0) {
 std::cerr << "Failed to set timecode metadata!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 // Write container header
 if (avformat_write_header(formatContext, nullptr) < 0) {
 std::cerr << "Failed to write file header!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 // Encode a dummy video frame
 AVFrame* frame = av_frame_alloc();
 if (!frame) {
 std::cerr << "Failed to allocate video frame!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 frame->format = videoCodecContext->pix_fmt;
 frame->width = videoCodecContext->width;
 frame->height = videoCodecContext->height;

 if (av_image_alloc(frame->data, frame->linesize, frame->width, frame->height, videoCodecContext->pix_fmt, 32) < 0) {
 std::cerr << "Failed to allocate frame buffer!" << std::endl;
 av_frame_free(&frame);
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 // Fill frame with black
 memset(frame->data[0], 0, frame->linesize[0] * frame->height); // Y plane
 memset(frame->data[1], 128, frame->linesize[1] * frame->height / 2); // U plane
 memset(frame->data[2], 128, frame->linesize[2] * frame->height / 2); // V plane

 // Encode the frame
 AVPacket packet;
 av_init_packet(&packet);
 packet.data = nullptr;
 packet.size = 0;

 if (avcodec_send_frame(videoCodecContext, frame) == 0) {
 if (avcodec_receive_packet(videoCodecContext, &packet) == 0) {
 packet.stream_index = videoStream->index;
 av_interleaved_write_frame(formatContext, &packet);
 av_packet_unref(&packet);
 }
 }

 av_frame_free(&frame);

 // Write a dummy packet for the timecode stream
 AVPacket tmcdPacket;
 av_init_packet(&tmcdPacket);
 tmcdPacket.stream_index = timecodeStream->index;
 tmcdPacket.flags |= AV_PKT_FLAG_KEY;
 tmcdPacket.data = nullptr; // Empty packet for timecode
 tmcdPacket.size = 0;
 tmcdPacket.pts = 0; // Set necessary PTS
 tmcdPacket.dts = 0;
 av_interleaved_write_frame(formatContext, &tmcdPacket);

 // Write trailer
 if (av_write_trailer(formatContext) < 0) {
 std::cerr << "Failed to write file trailer!" << std::endl;
 }

 av_dump_format(formatContext, 0, "test.mov", 1);

 // Cleanup
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);

 std::cout << "Test file with timecode created successfully: " << outputFileName << std::endl;

 return 0;
}


The code output is :


Creating test file with tmcd stream: test_tmcd.mov
[prores_ks @ 0x11ce05790] Autoselected HQ profile to keep best quality. It can be overridden through -profile option.
[mov @ 0x11ce04f20] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[mov @ 0x11ce04f20] Encoder did not produce proper pts, making some up.
Output #0, mov, to 'test.mov':
 Metadata:
 encoder : Lavf61.7.100
 Stream #0:0: Video: prores (HQ) (apch / 0x68637061), yuv422p10le, 1920x1080, q=2-31, 2000 kb/s, 15360 tbn
 Stream #0:1: Data: timed_id3 (tmcd / 0x64636D74)
 Metadata:
 timecode : 00:00:30:00
Test file with timecode created successfully: test_tmcd.mov



The ffprobe output is :


$ ffprobe test_tmcd.mov
ffprobe version 7.1.1 Copyright (c) 2007-2025 the FFmpeg developers
 built with Apple clang version 16.0.0 (clang-1600.0.26.6)
 configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/7.1.1_3 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags='-Wl,-ld_classic' --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libharfbuzz --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox --enable-neon
 libavutil 59. 39.100 / 59. 39.100
 libavcodec 61. 19.101 / 61. 19.101
 libavformat 61. 7.100 / 61. 7.100
 libavdevice 61. 3.100 / 61. 3.100
 libavfilter 10. 4.100 / 10. 4.100
 libswscale 8. 3.100 / 8. 3.100
 libswresample 5. 3.100 / 5. 3.100
 libpostproc 58. 3.100 / 58. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_tmcd.mov':
 Metadata:
 major_brand : qt 
 minor_version : 512
 compatible_brands: qt 
 encoder : Lavf61.7.100
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0[0x1]: Video: prores (HQ) (apch / 0x68637061), yuv422p10le, 1920x1080, 15360 tbn (default)
 Metadata:
 handler_name : VideoHandler
 vendor_id : FFMP
$ 




I've spent hours with all the AI models, with no help. Appealing to human intelligence now.
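Two hedged notes. First, as far as I can tell from ffmpeg's mov muxer, the CLI `-timecode` option does not build the data stream by hand: it stores a `timecode` metadata entry and lets the muxer create the tmcd track itself, so setting that entry on the output context (rather than adding a second stream with a borrowed codec ID such as AV_CODEC_ID_TIMED_ID3) may be the direction to investigate. Second, whatever the fix, the result is easy to verify by parsing ffprobe's JSON (assumes ffprobe is on PATH; the helper names are mine):

```python
import json
import subprocess

def streams_have_tmcd(probe: dict) -> bool:
    """Check parsed ffprobe JSON for a QuickTime timecode (tmcd) track.

    Timecode tracks written by the mov muxer show up in
    `ffprobe -show_streams` output with codec_tag_string 'tmcd'.
    """
    return any(s.get('codec_tag_string') == 'tmcd'
               for s in probe.get('streams', []))

def probe_streams(path: str) -> dict:
    """Run ffprobe on `path` and return its parsed JSON stream listing."""
    out = subprocess.run(
        ['ffprobe', '-v', 'quiet', '-print_format', 'json', '-show_streams', path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)
```

`streams_have_tmcd(probe_streams('test_tmcd.mov'))` should return True only once the track is actually muxed.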


-
NumPy array of a video changes from the original after writing into the same video
29 March 2021, by Rashiq — I have a video (test.mkv) that I have converted into a 4D NumPy array (frame, height, width, color_channel). I have even managed to convert that array back into the same video (test_2.mkv) without altering anything. However, after reading this new test_2.mkv back into a new NumPy array, the first video's array differs from the second's: their hashes don't match and numpy.array_equal() returns False. I have tried using both python-ffmpeg and scikit-video but cannot get the arrays to match.

Python-ffmpeg attempt:


import ffmpeg
import numpy as np
import hashlib

file_name = 'test.mkv'

# Get video dimensions and framerate
probe = ffmpeg.probe(file_name)
video_stream = next((stream for stream in probe['streams'] if stream['codec_type'] == 'video'), None)
width = int(video_stream['width'])
height = int(video_stream['height'])
frame_rate = video_stream['avg_frame_rate']

# Read video into buffer
out, error = (
 ffmpeg
 .input(file_name, threads=120)
 .output("pipe:", format='rawvideo', pix_fmt='rgb24')
 .run(capture_stdout=True)
)

# Convert video buffer to array
video = (
 np
 .frombuffer(out, np.uint8)
 .reshape([-1, height, width, 3])
)

# Convert array to buffer
video_buffer = (
 np.ndarray
 .flatten(video)
 .tobytes()
)

# Write buffer back into a video
process = (
 ffmpeg
 .input('pipe:', format='rawvideo', s='{}x{}'.format(width, height))
 .output("test_2.mkv", r=frame_rate)
 .overwrite_output()
 .run_async(pipe_stdin=True)
)
process.communicate(input=video_buffer)

# Read the newly written video
out_2, error = (
 ffmpeg
 .input("test_2.mkv", threads=40)
 .output("pipe:", format='rawvideo', pix_fmt='rgb24')
 .run(capture_stdout=True)
)

# Convert new video into array
video_2 = (
 np
 .frombuffer(out_2, np.uint8)
 .reshape([-1, height, width, 3])
)

# Video dimensions change
print(f'{video.shape} vs {video_2.shape}') # (844, 1080, 608, 3) vs (2025, 1080, 608, 3)
print(f'{np.array_equal(video, video_2)}') # False

# Hashes don't match
print(hashlib.sha256(bytes(video_2)).digest()) # b'\x88\x00\xc8\x0ed\x84!\x01\x9e\x08 \xd0U\x9a(\x02\x0b-\xeeA\xecU\xf7\xad0xa\x9e\\\xbck\xc3'
print(hashlib.sha256(bytes(video)).digest()) # b'\x9d\xc1\x07xh\x1b\x04I\xed\x906\xe57\xba\xf3\xf1k\x08\xfa\xf1\xfaM\x9a\xcf\xa9\t8\xf0\xc9\t\xa9\xb7'
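The shape mismatch above is consistent with two separate issues (my reading, not something the question states): the rawvideo *input* never declares `-pix_fmt rgb24`, so ffmpeg has to guess the bytes-per-frame and slices the buffer into the wrong number of frames; and even with matching shapes, the default output codec is lossy, so the hashes can never match. A sketch of a write command with both addressed, using `libx264rgb -qp 0` (lossless, and it consumes rgb24 directly so no colorspace conversion is involved):

```python
def lossless_write_cmd(width, height, frame_rate, out_path='test_2.mkv'):
    """Build an ffmpeg argv for piping raw RGB frames to a lossless file.

    The input block describes the raw frames exactly (format, pixel layout,
    geometry, rate); the output block picks a lossless RGB encoder so a
    read-back of out_path decodes to the same rgb24 bytes that were piped in.
    """
    return [
        'ffmpeg', '-y',
        '-f', 'rawvideo',
        '-pix_fmt', 'rgb24',               # each frame is width*height*3 bytes
        '-s', f'{width}x{height}',
        '-r', str(frame_rate),
        '-i', 'pipe:',
        '-c:v', 'libx264rgb', '-qp', '0',  # lossless, stays in RGB
        out_path,
    ]
```

The equivalent change in the python-ffmpeg call should be roughly `.input('pipe:', format='rawvideo', pix_fmt='rgb24', s=..., r=...)` plus `.output('test_2.mkv', vcodec='libx264rgb', qp=0)`.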



Scikit-video attempt:


import skvideo.io as sk
import numpy as np
import hashlib

video_data = sk.vread('test.mkv')

sk.vwrite('test_2_ski.mkv', video_data)

video_data_2 = sk.vread('test_2_ski.mkv')

# Dimensions match but...
print(video_data.shape) # (844, 1080, 608, 3)
print(video_data_2.shape) # (844, 1080, 608, 3)

# ...array elements don't
print(np.array_equal(video_data, video_data_2)) # False

# Hashes don't match either
print(hashlib.sha256(bytes(video_data_2)).digest()) # b'\x8b?]\x8epD:\xd9B\x14\xc7\xba\xect\x15G\xfaRP\xde\xad&EC\x15\xc3\x07\n{a[\x80'
print(hashlib.sha256(bytes(video_data)).digest()) # b'\x9d\xc1\x07xh\x1b\x04I\xed\x906\xe57\xba\xf3\xf1k\x08\xfa\xf1\xfaM\x9a\xcf\xa9\t8\xf0\xc9\t\xa9\xb7'



I don't understand where I'm going wrong, and neither of the respective documentations covers this particular task. Any help is appreciated. Thank you.