
Media (1)
-
Ogg detection bug
22 March 2013
Updated: April 2013
Language: French
Type: Video
Other articles (37)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors can edit their information on the authors page
-
Support for all types of media
10 April 2011
Unlike many modern software packages and other document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)
-
Use it, talk about it, critique it
10 April 2011
The first thing to do is to talk about it, either directly with the people involved in its development, or with those around you, to convince new people to use it.
The larger the community, the faster it will evolve...
A mailing list is available for any exchange between users.
On other sites (6727)
-
How to save variables at start of script for use later if script needs to be re-run due to errors or bad user input
28 November 2023, by slyfox1186
I have a script that uses GitHub's API to get the latest version numbers of the repositories that I am trying to download and then compile.


Without a specialized token from GitHub you are only allowed 50 API calls a day, versus the 5,000 a day allowed with an API user token.
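
As an aside (not something the script currently does), GitHub's /rate_limit endpoint reports how many calls you have left and does not itself count against the quota, so it is a cheap way to check where you stand before a run:

# Check the remaining unauthenticated API quota (does not consume a call).
curl -m 5 -sSL 'https://api.github.com/rate_limit' | jq -r '.rate.remaining'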


I want to fetch all of the repository version numbers up front and have the script import them, so that someone who accidentally cancels the build partway through (for whatever reason) won't have to eat up their 50-call daily API allowance.


Essentially, store each repo's version number; if the user then needs to rerun the script, any version numbers that have already been saved are skipped (eliminating an API call), and any that still need to be sourced are fetched and then stored for use in the script.


I am kind of lost for a method on how to go about this.


Maybe some sort of external file could be generated?
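
One possible shape for that (a minimal sketch; the file name versions.cache and the helpers cache_get/cache_set are placeholders of mine, not part of the script) is a small key=value cache file that is consulted before any API call:

# versions.cache holds one "repo=version" line per repository.
cache_file='versions.cache'

cache_get() {
    # Print the cached version for a repo key, if one exists.
    [ -f "$cache_file" ] && grep -m1 "^$1=" "$cache_file" | cut -d '=' -f2-
}

cache_set() {
    # Add or update the "repo=version" entry in the cache file.
    touch "$cache_file"
    grep -v "^$1=" "$cache_file" > "$cache_file.tmp" || true
    printf '%s=%s\n' "$1" "$2" >> "$cache_file.tmp"
    mv "$cache_file.tmp" "$cache_file"
}

# Usage idea: only call the GitHub API when the repo is not cached yet.
g_ver="$(cache_get 'yasm/yasm')"
if [ -z "$g_ver" ]; then
    git_ver_fn 'yasm/yasm' '1' 'T'   # the function shown further down
    cache_set 'yasm/yasm' "$g_ver"
fi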


What my script does is build FFmpeg from source code; all of the external libraries that can be linked to it are also built from their latest source code.


The script calls the function git_ver_fn and passes arguments to it, which are parsed inside the function and routed to one of two other functions, git_1_fn or git_2_fn. Those functions pass the parsed arguments on to the curl command, which changes the URL based on the arguments it receives, and use the jq command to capture the GitHub version number and the download link for the tar.gz file.

It is the version number that I am trying to figure out the best way to store in case the script fails and has to be rerun, which would otherwise eat up the 50-call API limit that GitHub imposes without a token. I can't post my token in the script because GitHub deactivates it, and then users would be out of luck if they need to run the script more than once.


curl_timeout='5'

git_1_fn()
{
    # SCRAPE GITHUB WEBSITE FOR LATEST REPO VERSION
    github_repo="$1"
    github_url="$2"

    if curl_cmd="$(curl -m "$curl_timeout" -sSL "https://api.github.com/repos/$github_repo/$github_url")"; then
        g_ver="$(echo "$curl_cmd" | jq -r '.[0].name')"
        g_ver="${g_ver#v}"
        g_ssl="$(echo "$curl_cmd" | jq -r '.[0].name')"
        g_ssl="${g_ssl#OpenSSL }"
        g_pkg="$(echo "$curl_cmd" | jq -r '.[0].name')"
        g_pkg="${g_pkg#pkg-config-}"
        g_url="$(echo "$curl_cmd" | jq -r '.[0].tarball_url')"
    fi
}

git_2_fn()
{
    videolan_repo="$1"
    videolan_url="$2"

    if curl_cmd="$(curl -m "$curl_timeout" -sSL "https://code.videolan.org/api/v4/projects/$videolan_repo/repository/$videolan_url")"; then
        g_ver="$(echo "$curl_cmd" | jq -r '.[0].commit.id')"
        g_sver="$(echo "$curl_cmd" | jq -r '.[0].commit.short_id')"
        g_ver1="$(echo "$curl_cmd" | jq -r '.[0].name')"
        g_ver1="${g_ver1#v}"
    fi
}

git_ver_fn()
{
    local v_flag v_tag url_tag

    v_url="$1"
    v_tag="$2"

    if [ -n "$3" ]; then v_flag="$3"; fi

    if [ "$v_flag" = 'B' ] && [ "$v_tag" = '2' ]; then
        url_tag='git_2_fn' gv_url='branches'
    fi

    if [ "$v_flag" = 'X' ] && [ "$v_tag" = '5' ]; then
        url_tag='git_5_fn'
    fi

    if [ "$v_flag" = 'T' ] && [ "$v_tag" = '1' ]; then
        url_tag='git_1_fn' gv_url='tags'
    elif [ "$v_flag" = 'T' ] && [ "$v_tag" = '2' ]; then
        url_tag='git_2_fn' gv_url='tags'
    fi

    if [ "$v_flag" = 'R' ] && [ "$v_tag" = '1' ]; then
        url_tag='git_1_fn'; gv_url='releases'
    elif [ "$v_flag" = 'R' ] && [ "$v_tag" = '2' ]; then
        url_tag='git_2_fn'; gv_url='releases'
    fi

    case "$v_tag" in
        2) url_tag='git_2_fn';;
    esac

    "$url_tag" "$v_url" "$gv_url" 2>/dev/null
}

# begin source code building
git_ver_fn 'freedesktop/pkg-config' '1' 'T'
if build 'pkg-config' "$g_pkg"; then
    download "https://pkgconfig.freedesktop.org/releases/$g_ver.tar.gz" "$g_ver.tar.gz"
    execute ./configure --silent --prefix="$workspace" --with-pc-path="$workspace"/lib/pkgconfig/ --with-internal-glib
    execute make -j "$cpu_threads"
    execute make install
    build_done 'pkg-config' "$g_pkg"
fi

git_ver_fn 'yasm/yasm' '1' 'T'
if build 'yasm' "$g_ver"; then
    download "https://github.com/yasm/yasm/releases/download/v$g_ver/yasm-$g_ver.tar.gz" "yasm-$g_ver.tar.gz"
    execute ./configure --prefix="$workspace"
    execute make -j "$cpu_threads"
    execute make install
    build_done 'yasm' "$g_ver"
fi



-
Xvfb and PulseAudio not in sync
14 December 2023, by Matrix 404
I'm excited to introduce my new JavaScript server-side library called XFP Streamer, designed to handle recording and streaming Puppeteer window content. However, I'm currently facing an issue with audio synchronization, and I could really use some help from someone experienced with ffmpeg and recording in general.


The library's repository is available on GitHub, and I warmly welcome any contributions or assistance. Feel free to check it out at https://github.com/mboussaid/xfp-streamer.


Below is a simple example demonstrating how to record the Google website into a file.flv video file using XFP:


const XFP = require('./index');
XFP.onReady().then(async () => {
    // create a new xfp instance
    const xfp = new XFP({
        debug: 1
    });
    await xfp.onStart();
    // record everything into the file file.flv
    xfp.pipeToFile('file.flv', {
        debug: 1
    })
    // xfp.pipeToRtmp('file.flv', 'RTMP LINK HERE')
    await xfp.onUseUrl('https://www.google.com') // navigate to google
    setTimeout(async () => {
        await xfp.onStop();
    }, 5000) // stop everything after 5 seconds
}, (missing) => {
    // missing tools
    console.log('Missing tools', missing)
})



Please note that to ensure proper functionality, you will need to have the following tools installed:


pulseaudio
xvfb
ffmpeg
pactl
pacmd
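
On Debian or Ubuntu, these can usually be pulled in with something along these lines (the package names are my assumption, not taken from the project; pactl and pacmd ship with the PulseAudio packages):

# Install the capture and encoding dependencies.
sudo apt-get install -y pulseaudio pulseaudio-utils xvfb ffmpeg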
Currently, audio and video synchronization is not working as expected. If you have experience with ffmpeg and recording, I would greatly appreciate your help in resolving this issue.
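
For comparison, when driving ffmpeg directly against an Xvfb display and a PulseAudio monitor source, drift usually comes down to how the two inputs are timestamped. A rough sketch of that kind of invocation (the display :99 and the source name virtual_sink.monitor are placeholders, not values used by XFP):

# Grab X display :99 and a PulseAudio monitor into one FLV, stamping both
# inputs with wall-clock time so audio and video stay aligned.
ffmpeg \
    -thread_queue_size 1024 -use_wallclock_as_timestamps 1 \
    -f x11grab -framerate 30 -video_size 1280x720 -i :99 \
    -thread_queue_size 1024 -use_wallclock_as_timestamps 1 \
    -f pulse -i virtual_sink.monitor \
    -c:v libx264 -preset veryfast -pix_fmt yuv420p \
    -c:a aac -ar 44100 \
    -f flv file.flv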


Thank you all for your support, and I look forward to your contributions!


Best regards,


-
Can't correctly decode an image frame using PyAV
17 April 2023, by Martin Blore
I'm trying to simply encode and decode a captured frame from the webcam. I want to be able to send this over TCP, but at the moment I'm having trouble getting it to work just locally.


Here's my code, which simply takes the frame from the webcam, encodes it, then decodes it, and displays the two images in a new window. The two images look like this:




Here's the code:


import struct
import cv2
import socket
import av
import time
import os

class PerfTimer:
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self.start_time = time.perf_counter()

    def __exit__(self, type, value, traceback):
        end_time = time.perf_counter()
        print(f"'{self.name}' taken:", end_time - self.start_time, "seconds.")

os.environ['AV_PYTHON_AVISYNTH'] = 'C:/ffmpeg/bin'

socket_enabled = False
sock = None
if socket_enabled:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("Connecting to server...")
    sock.connect(('127.0.0.1', 8000))

# Set up video capture.
print("Opening web cam...")
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)

# Initialize the encoder.
encoder = av.CodecContext.create('h264', 'w')
encoder.width = 800
encoder.height = 600
encoder.pix_fmt = 'yuv420p'
encoder.bit_rate = 5000

# Initialize the decoder.
decoder = av.CodecContext.create('h264', 'r')
decoder.width = 800
decoder.height = 600
decoder.pix_fmt = 'yuv420p'
decoder.bit_rate = 5000

print("Streaming...")
while cap.isOpened():

    # Capture the frame from the camera.
    ret, orig_frame = cap.read()

    cv2.imshow('Source Video', orig_frame)

    # Convert to YUV.
    img_yuv = cv2.cvtColor(orig_frame, cv2.COLOR_BGR2YUV_I420)

    # Create a video frame object from the numpy array.
    video_frame = av.VideoFrame.from_ndarray(img_yuv, format='yuv420p')

    with PerfTimer("Encoding") as p:
        encoded_frames = encoder.encode(video_frame)

    # Sometimes the encode results in no frames encoded, so let's skip the frame.
    if len(encoded_frames) == 0:
        continue

    print(f"Decoding {len(encoded_frames)} frames...")

    for frame in encoded_frames:
        encoded_frame_bytes = bytes(frame)

        if socket_enabled:
            # Get the size of the encoded frame in bytes
            size = struct.pack(