Media (91)

Other articles (65)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MédiaSpip installation is version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • The plugin: Podcasts.

    14 July 2010, by

    Podcasting is once again an issue that reveals the state of standardisation of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly geared towards iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "open" and notably backed by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (10652)

  • FFMPEG & YouTube Live - "Bad Video Settings" - Please use a keyframe frequency of four seconds or less

    28 January 2020, by John Doe

    I'm trying to live stream to YouTube and, from my perspective, everything seems to be working fine. However, YouTube keeps giving me the following message:

    Bad Video Settings

    Please use a keyframe frequency of four seconds or less. Currently, keyframes are not being sent often enough, which will cause buffering. The current keyframe frequency is 8.4 seconds. Note that ingestion errors can cause incorrect GOP (group of pictures) sizes.

    I've dug around for hours and so far nothing seems to be making any difference. I added -g 60 and, as I didn't fully understand the option, I also tried -g 2, but neither worked. Here is the command I'm currently using:

    ffmpeg -re -f concat -safe 0 -i "concat.txt" -c copy -preset veryfast -maxrate 1200k -bufsize 2400k -framerate 30 -g 60 -f flv rtmp://a.rtmp.youtube.com/live2/XXXX-XXXX-XXXX-XXXX
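
    For what it's worth, -g (like -preset and -maxrate) is an encoder option, so it has no effect while the video is stream-copied with -c copy; the keyframe interval of the source files passes through unchanged, which is why YouTube still reports 8.4 seconds. A minimal re-encoding sketch, assuming a 30 fps source, libx264 and the same placeholder stream key (the shape of a fix, not a definitive answer):

    ffmpeg -re -f concat -safe 0 -i "concat.txt" -c:v libx264 -preset veryfast -maxrate 1200k -bufsize 2400k -r 30 -g 60 -keyint_min 60 -c:a aac -b:a 128k -f flv rtmp://a.rtmp.youtube.com/live2/XXXX-XXXX-XXXX-XXXX

    With -r 30 and -g 60 a keyframe is forced every two seconds, inside YouTube's four-second limit.
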
  • moov atom not found (Extracting unique faces from youtube video)

    10 April 2023, by Tochukwu

    I got the error below

    Saved 0 unique faces
[mov,mp4,m4a,3gp,3g2,mj2 @ 0000024f505224c0] moov atom not found

    I'm trying to extract unique faces from a YouTube video with the code below, which is meant to download the video and save the unique faces it finds into a folder named faces. I ended up with an empty video file and an empty folder. Please check the Python code below.

    import os
import urllib.request
import cv2
import face_recognition
import numpy as np

# Step 1: Download the YouTube video
video_url = "https://www.youtube.com/watch?v=JriaiYZZhbY&t=4s"
urllib.request.urlretrieve(video_url, "video.mp4")

# Step 2: Extract frames from the video
cap = cv2.VideoCapture("video.mp4")
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
frames = []
for i in range(frame_count):
    cap.set(cv2.CAP_PROP_POS_FRAMES, i)
    ret, frame = cap.read()
    if ret:
        frames.append(frame)
cap.release()

# Step 3: Detect faces in the frames
detected_faces = []
for i, frame in enumerate(frames):
    face_locations = face_recognition.face_locations(frame)
    for j, location in enumerate(face_locations):
        top, right, bottom, left = location
        face_image = frame[top:bottom, left:right]
        cv2.imwrite(f"detected_{i}_{j}.jpg", face_image)
        detected_faces.append(face_image)

# Step 4: Save the faces as separate images
if not os.path.exists("faces"):
    os.makedirs("faces")
known_faces = []
for i in range(len(detected_faces)):
    face_image = detected_faces[i]
    face_encoding = face_recognition.face_encodings(face_image)[0]
    known_faces.append(face_encoding)
    cv2.imwrite(f"faces/face_{i}.jpg", face_image)
print("Saved", len(known_faces), "unique faces")


    


  • No sound when running ffmpeg on youtube live

    28 May 2020, by Bartonsen

    Despite my limited knowledge of ffmpeg, I've managed to livestream my birdbox camera to YouTube using ffmpeg running on a Raspberry Pi. The camera also has audio, and when I view the RTSP stream locally in VLC on Windows, the audio is fine.

    However, on YouTube there is no sound (same RTSP URL as used locally in Windows), and I see this warning in YouTube Studio: "The current bitrate (0) of the audio stream is lower than the recommended bitrate. We recommend using a 128 Kbps bitrate for the audio stream."

    How can I get the sound through to YouTube?
This is the command I run. I found it on the net, adapted it for my usage, and got video working straight away:

    pi@raspberrypi:~ $ ffmpeg -f lavfi -i anullsrc -thread_queue_size 512 -rtsp_transport udp -i "rtsp://10.x.x.x:554/user=user&password=password&channel=1&stream=0.sdp?real_stream" -tune zerolatency -vcodec libx264 -use_wallclock_as_timestamps 1 -pix_fmt + -c:v copy -c:a aac -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/mykey
ffmpeg version git-2020-05-01-3c740f2 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 6.3.0 (Raspbian 6.3.0-18+rpi1+deb9u1) 20170516
  configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree
  libavutil      56. 43.100 / 56. 43.100
  libavcodec     58. 82.100 / 58. 82.100
  libavformat    58. 42.102 / 58. 42.102
  libavdevice    58.  9.103 / 58.  9.103
  libavfilter     7. 80.100 /  7. 80.100
  libswscale      5.  6.101 /  5.  6.101
  libswresample   3.  6.100 /  3.  6.100
  libpostproc    55.  6.100 / 55.  6.100
Input #0, lavfi, from 'anullsrc':
  Duration: N/A, start: 0.000000, bitrate: 705 kb/s
    Stream #0:0: Audio: pcm_u8, 44100 Hz, stereo, u8, 705 kb/s
Guessed Channel Layout for Input Stream #1.1 : mono
Input #1, rtsp, from 'rtsp://10.x.x.x:554/user=user&password=password&channel=1&stream=0.sdp?real_stream':
  Metadata:
    title           : RTSP Session
  Duration: N/A, start: 0.000000, bitrate: N/A
    Stream #1:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 1920x1080, 20 fps, 20 tbr, 90k tbn, 180k tbc
    Stream #1:1: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s
Multiple -c, -codec, -acodec, -vcodec, -scodec or -dcodec options specified for stream 0, only the last option '-c:v copy' will be used.
Stream mapping:
  Stream #1:0 -> #0:0 (copy)
  Stream #0:0 -> #0:1 (pcm_u8 (native) -> aac (native))
Press [q] to stop, [?] for help
Output #0, flv, to 'rtmp://a.rtmp.youtube.com/live2/mykey':
  Metadata:
    encoder         : Lavf58.42.102
    Stream #0:0: Video: h264 (Main) ([7][0][0][0] / 0x0007), yuvj420p(pc, bt709, progressive), 1920x1080, q=2-31, 20 fps, 20 tbr, 1k tbn, 90k tbc
    Stream #0:1: Audio: aac (LC) ([10][0][0][0] / 0x000A), 44100 Hz, stereo, fltp, 128 kb/s
    Metadata:
      encoder         : Lavc58.82.100 aac
[flv @ 0x2c43750] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
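
    The stream mapping in the log shows the cause of the silence: ffmpeg's default stream selection picks the stereo anullsrc input as the audio track, so the camera's own pcm_alaw stream (#1:1) is never sent and YouTube sees an audio bitrate of 0. A hedged sketch that drops the dummy input, maps the camera's video and audio explicitly, and transcodes the A-law audio to AAC at 128 kbps (camera URL and stream key left as in the question):

    ffmpeg -rtsp_transport udp -thread_queue_size 512 -use_wallclock_as_timestamps 1 -i "rtsp://10.x.x.x:554/user=user&password=password&channel=1&stream=0.sdp?real_stream" -map 0:v -map 0:a -c:v copy -c:a aac -ar 44100 -b:a 128k -f flv rtmp://a.rtmp.youtube.com/live2/mykey

    Resampling to 44.1 kHz with -ar 44100 and -b:a 128k matches the audio parameters the original output was already advertising, but now fed from the camera's microphone instead of silence.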