
Other articles (98)
-
Submitting improvements and additional plugins
10 April 2011
If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the official distribution will be considered.
You can use the development mailing list to announce it or to ask for help with building this plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in the standalone version.
As with the previous version, all the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in the standalone version.
For a working installation, all the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
On other sites (17218)
-
ffmpeg split_by_silence output flac files have broken metadata [duplicate]
20 March 2023, by Martin
Input flac file: https://file.io/D1EXEgJ0OtgM


I have a shell script, split_by_silence.sh, which takes an input audio file, detects split points where silence (or very low-volume audio) occurs, and then uses ffmpeg to split the audio file into the individual segments.

So if 10 split points = 11 audio files exported.


When I run it on an mp3, it produces 11 files (as expected with my input file) and each file has correct length metadata.


But when ffmpeg splits a flac file, the 'length' metadata of the output files is broken.


I've tried following the closest question, marked as a duplicate, here :
Cutting FLAC using ffmpeg does not change timestamps accordingly


By adding a step to re-encode my flac file in ffmpeg :


ffmpeg -i full.flac -ss 0 -t 2241 fixed.flac


I'm not sure whether that is actually working, but it outputs a fixed.flac file. When I then run that file through my split command:


ffmpeg -i "inputFile.flac" -c copy -map 0 -f segment -segment_times "22,441,5567,1122,etc..." "%d_output.flac"


The output files still have broken flac 'length' metadata, as you can see in my image, even though when opened in Audacity each file does have a real length. Did ffmpeg just encode the wrong value?
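A quick way to check the duration that ffmpeg actually wrote into a given segment, as opposed to the length Audacity derives from the decoded audio, is to ask ffprobe for the container-level duration. A small sketch (my addition; "0_output.flac" is just a hypothetical segment name matching the "%d_output.flac" pattern used below):

# Print the container-level duration ffprobe reports for one segment.
import subprocess

duration = subprocess.run(
    ["ffprobe", "-v", "error", "-show_entries", "format=duration",
     "-of", "default=noprint_wrappers=1:nokey=1", "0_output.flac"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(f"reported duration: {duration} seconds")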




This is my current ffmpeg version :


$ ffmpeg
ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
 built with gcc 11 (Ubuntu 11.2.0-19ubuntu1)



And here is the .sh file that I run with:


$ ./split_by_silence.sh


# -----------------------
# SPLIT BY SILENCE
# Requirements:
# ffmpeg
# $ apt-get install bc
# How To Run:
# $ ./split_by_silence.sh "full_lowq.flac" %03d_output.flac

# output title format
OUTPUTTITLE="%d_output.flac"
# input audio filepath
IN="/mnt/c/Users/marti/Documents/projects/split_by_silence2/full.flac"
# output audio filepath
OUTPUTFILEPATH="/mnt/c/Users/marti/Documents/projects/split_by_silence2"
# ffmpeg option: split input audio based on this silencedetect value
SD_PARAMS="-11dB"
MIN_FRAGMENT_DURATION=120 # split option: minimum fragment duration
export MIN_FRAGMENT_DURATION

# -----------------------
# step: ffmpeg
# goal: get comma separated list of split points (use ffmpeg to determine points where audio is at or below SD_PARAMS [-11dB])

echo "_______________________"
echo "Determining split points..."
SPLITS=$(
 ffmpeg -v warning -i "$IN" -af silencedetect="$SD_PARAMS",ametadata=mode=print:file=-:key=lavfi.silence_start -vn -sn -f s16le -y /dev/null \
 | grep lavfi.silence_start= \
 | cut -f 2-2 -d= \
 | perl -ne '
 our $prev;
 INIT { $prev = 0.0; }
 chomp;
 if (($_ - $prev) >= $ENV{MIN_FRAGMENT_DURATION}) {
 print "$_,";
 $prev = $_;
 }
 ' \
 | sed 's!,$!!'
)
echo "split points list= $SPLITS"

# add 5.5 seconds to each split point
IFS=',' read -ra SPLITS_ARRAY <<< "$SPLITS"
for i in "${!SPLITS_ARRAY[@]}"; do
 SPLITS_ARRAY[i]=$(echo "${SPLITS_ARRAY[i]}+5.5" | bc)
done

SPLITS=$(IFS=','; echo "${SPLITS_ARRAY[*]}")
echo "SPLITS=$SPLITS"

# using the split points list, calculate how many output audio files will be created 
num=0
res="${SPLITS//[^,]}"
CHARCOUNT="${#res}"
num=$((CHARCOUNT + 2))
echo "_______________________"
echo "Exporting $num tracks with ffmpeg"

ffmpeg -i "$IN" -c copy -map 0 -f segment -segment_times "$SPLITS" "$OUTPUTFILEPATH/$OUTPUTTITLE"

echo "Done."

echo "------------------------------------------------"
echo "$num TRACKS EXPORTED"



I think this is an open defect?
https://trac.ffmpeg.org/ticket/4905
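If it is the stream-copy segmenting that leaves a stale duration in each segment's FLAC STREAMINFO, one possible workaround (my suggestion, not part of the original script) is to re-encode every segment afterwards: FLAC-to-FLAC re-encoding is lossless, and writing fresh frames lets ffmpeg record the correct length. A minimal sketch, assuming segments matching the "%d_output.flac" pattern from the script above:

# Hypothetical post-processing step: re-encode each "*_output.flac" segment so that
# ffmpeg writes a fresh STREAMINFO block (and hence a correct duration).
import glob
import subprocess

for seg in sorted(glob.glob("*_output.flac")):
    fixed = seg.replace("_output.flac", "_fixed.flac")
    subprocess.run(["ffmpeg", "-y", "-i", seg, "-c:a", "flac", fixed], check=True)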


-
Not able to merge init.mp4 and seg-*.m4s with ffmpeg and Python due to their file format difference
1 February 2023, by XiBBaL
I'm developing a video downloader (only for free videos) for the Korean anime streaming site https://laftel.net/


I guess laftel.net uses MPEG-DASH for its streaming.
I found an "init.mp4" file and numbered "seg-N.m4s" files in the Chrome developer tools.


The code below downloads seg-1.m4s (the first segment) through seg-239.m4s (the last segment) plus the init.mp4 file, and it works.



(I skip the beginning because there is code that collects the "Request-URL" for each .m4s file from the network stream.)


# variable "found" is part of the "Request-URL"

import os
import random
from time import sleep

def curl_m4s():
    for i in range(1, 240):  # number of the .m4s segment files is constant at 239
        vid_url = f"https://mediacloud.laftel.net/{found}/video/avc1/2/seg-{i}.m4s"
        aud_url = f"https://mediacloud.laftel.net/{found}/audio/mp4a/eng/seg-{i}.m4s"

        if i < 10:  # for single digit num
            os.system(f"curl {vid_url} > {location}/vids/vid00{i}.m4s")
            os.system(f"curl {aud_url} > {location}/auds/aud00{i}.m4s")
            sleep(random.randint(1, 3))

        elif i < 100:  # for double digit num
            os.system(f"curl {vid_url} > {location}/vids/vid0{i}.m4s")
            os.system(f"curl {aud_url} > {location}/auds/aud0{i}.m4s")
            sleep(random.randint(1, 3))

        else:  # for three digit num
            os.system(f"curl {vid_url} > {location}/vids/vid{i}.m4s")
            os.system(f"curl {aud_url} > {location}/auds/aud{i}.m4s")
            sleep(random.randint(1, 3))

location = os.getcwd()

os.system(f"curl https://mediacloud.laftel.net/{found}/video/avc1/2/init.mp4 > {location}/vids/vid_init.mp4")
os.system(f"curl https://mediacloud.laftel.net/{found}/audio/mp4a/eng/init.mp4 > {location}/auds/aud_init.mp4")
os.system(f"curl https://mediacloud.laftel.net/{found}/stream.mpd > {location}/stream.mpd")

curl_m4s()
# uses curl in cmd and downloads each of the .m4s files (001 ~ 239)

 



So in the vids folder I have init.mp4 plus seg-1.m4s through seg-239.m4s for video,
and in the auds folder I have init.mp4 plus seg-1.m4s through seg-239.m4s for audio.


The problem is that I cannot merge the init file and the segment files.
I have no idea how to combine .mp4 and .m4s files together.
There are a lot of example codes for merging init.m4s + seg-*.m4s, but I couldn't find any for init.mp4 + seg-*.m4s.


I tried to merge the segments together first, like this, and it works.


location = os.getcwd()

os.system(f"cd {location}")
os.system("copy /b vid*.m4s vid_full.m4s")



And now I want to merge in init.mp4, because it holds a lot of information about the whole video file.
But how?


I tried these and none of them worked (each looks like a successful merge, but the result does not contain any watchable video):


# vid_full is merged segment files (seg-1.m4s + ... + seg-239.m4s)
 
1. 
os.system("copy /b init.mp4 + vid*.m4s video_full.m4s")
os.system("ffmpeg -i video_full.m4s -c:a copy video_full.mp4")


2. 
os.system("type vid_init.mp4 index.txt > Filename.mp4")


3. 
import os 

os.system("ffmpeg -i vid_full.m4s -c:a copy vid_full.mp4")
os.system("copy /b vid_init.mp4 + vid_full.mp4 VIDEO.mp4")



The whole problem arises because the format of the init file is .mp4. If it were .m4s, I guess I could merge it.


And I guess I must merge the init file and the segment files together, both in .m4s format. Is that right?
Via ffmpeg I couldn't convert init.mp4 to init.m4s, so that is also part of the problem.


So, please help me merge init.mp4 and the segment .m4s files.


All methods are welcome, as long as they are based on Python.


- I tried to merge init.mp4 + merged_segment.m4s, but it failed
- I tried to convert init.mp4 to init.m4s via ffmpeg, but it failed
- I tried to convert all the segments to .mp4 files and merge them into init.mp4, but it failed








I want to merge the init file and the segment files and make a playable video.
Please teach me how to do this.


The links below are what I used for my test:


[Laftel.net URL]
https://laftel.net/player/40269/46123


[Request URL for init.mp4 (video)]
https://mediacloud.laftel.net/2021/04/46773/v15/video/dash/video/avc1/2/init.mp4


[Request URL for init.mp4 (audio)]
https://mediacloud.laftel.net/2021/04/46773/v15/video/dash/audio/mp4a/eng/init.mp4


[Request URL for seg-1.m4s (video)]
https://mediacloud.laftel.net/2021/04/46773/v15/video/dash/video/avc1/2/seg-1.m4s


[Request URL for seg-1.m4s (audio)]
https://mediacloud.laftel.net/2021/04/46773/v15/video/dash/audio/mp4a/eng/seg-1.m4s
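For what it's worth, the usual way to make DASH segments playable is not to convert init.mp4 into .m4s at all: the init segment and the media segments form one continuous fragmented MP4 stream, so byte-concatenating init.mp4 followed by the segments in order already yields a valid file, which ffmpeg can then remux with -c copy. A minimal Python sketch, assuming the vids/ and auds/ layout produced by the download script above (vid_init.mp4, vid001.m4s, and so on are the names that script writes):

# Concatenate init.mp4 + segments per track, then mux video and audio losslessly.
import glob
import subprocess

def concat_track(init_path, seg_glob, out_path):
    # Byte-concatenate the init segment followed by the media segments in order.
    with open(out_path, "wb") as out:
        with open(init_path, "rb") as f:
            out.write(f.read())
        for seg in sorted(glob.glob(seg_glob)):
            with open(seg, "rb") as f:
                out.write(f.read())

concat_track("vids/vid_init.mp4", "vids/vid*.m4s", "video_full.mp4")
concat_track("auds/aud_init.mp4", "auds/aud*.m4s", "audio_full.mp4")

# Combine the two tracks into one playable file without re-encoding.
subprocess.run(
    ["ffmpeg", "-y", "-i", "video_full.mp4", "-i", "audio_full.mp4",
     "-map", "0:v", "-map", "1:a", "-c", "copy", "merged.mp4"],
    check=True,
)

This is the same idea the copy /b attempts above were reaching for; putting the init segment first, keeping the segments in order, and doing it per track before muxing are the parts that matter.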


-
Extracting Metadata from a video file (.mkv) using FFmpeg
14 April 2023, by Ashutosh Singla
I have a video file that contains 4 streams: 3 video streams and one stream for metadata.


Stream Info :


Input #0, matroska,webm, from 'output_master.mkv':
  Metadata:
    title           : Azure Kinect
    encoder         : libmatroska-1.4.9
    creation_time   : 2021-05-20T12:11:15.000000Z
    K4A_DEPTH_DELAY_NS: 0
    K4A_WIRED_SYNC_MODE: MASTER
    K4A_COLOR_FIRMWARE_VERSION: 1.6.110
    K4A_DEPTH_FIRMWARE_VERSION: 1.6.79
    K4A_DEVICE_SERIAL_NUMBER: 000123102712
    K4A_START_OFFSET_NS: 298800000
  Duration: 00:00:40.03, start: 0.000000, bitrate: 480934 kb/s
  Stream #0:0(eng): Video: mjpeg (Baseline) (MJPG / 0x47504A4D), yuvj422p(pc, bt470bg/unknown/unknown), 2048x1536, SAR 1:1 DAR 4:3, 30 fps, 30 tbr, 1000k tbn (default)
    Metadata:
      title           : COLOR
      K4A_COLOR_TRACK : 14499183330009048
      K4A_COLOR_MODE  : MJPG_1536P
  Stream #0:1(eng): Video: rawvideo (b16g / 0x67363162), gray16be, 640x576, SAR 1:1 DAR 10:9, 30 fps, 30 tbr, 1000k tbn (default)
    Metadata:
      title           : DEPTH
      K4A_DEPTH_TRACK : 429408169412322196
      K4A_DEPTH_MODE  : NFOV_UNBINNED
  Stream #0:2(eng): Video: rawvideo (b16g / 0x67363162), gray16be, 640x576, SAR 1:1 DAR 10:9, 30 fps, 30 tbr, 1000k tbn (default)
    Metadata:
      title           : IR
      K4A_IR_TRACK    : 194324406376800992
      K4A_IR_MODE     : ACTIVE
  Stream #0:3: Attachment: none
    Metadata:
      filename        : calibration.json
      mimetype        : application/octet-stream
      K4A_CALIBRATION_FILE: calibration.json



I am using the command below to extract Stream #0:0, Stream #0:1, and Stream #0:2 by changing map 0:X.

ffmpeg -i output_master.mkv -c copy -allow_raw_vfw 1 -map 0:0 temp_0.mkv 
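A small illustration (mine, not from the original post) of running that same command for map 0:0, 0:1 and 0:2 in turn:

# Illustrative only: repeat the command above for each of the three A/V streams.
import subprocess

for idx in range(3):
    subprocess.run(
        ["ffmpeg", "-y", "-i", "output_master.mkv", "-c", "copy",
         "-allow_raw_vfw", "1", "-map", f"0:{idx}", f"temp_{idx}.mkv"],
        check=True,
    )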



To extract the metadata from all the streams and store them in metadata.txt, I am using the command below :


ffprobe -v quiet -show_format -show_streams -print_format json output_master.mkv > metadata.txt



What should be the command to extract Stream #0:3? Any help would be appreciated.
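One hedged suggestion (an editorial note, not from the original post): Stream #0:3 is a Matroska attachment rather than an audio/video stream, so mapping it into another .mkv will not give you the calibration file; ffmpeg's -dump_attachment input option is intended for this case. A sketch:

# Sketch: dump the attached calibration.json from the Matroska file.
# -dump_attachment is an input option, so it goes before -i. Some ffmpeg builds
# still exit with "At least one output file must be specified" after the attachment
# has already been written, which is why check=False is used here.
import subprocess

subprocess.run(
    ["ffmpeg", "-dump_attachment:t:0", "calibration.json", "-i", "output_master.mkv"],
    check=False,
)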