Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (15)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP site to find out.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded as Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and the document is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (4905)

  • iOS Radio App : Need to extract and stream audio-only from HLS streams with video content

    20 December 2024, by Bader Alghamdi

    I'm developing an iOS radio app that plays various HLS streams. The challenge is that some stations broadcast HLS streams containing both audio and video (example: https://svs.itworkscdn.net/smcwatarlive/smcwatar/chunks.m3u8), but I want to:

    • Extract and play only the audio track
    • Support AirPlay for audio-only streaming
    • Minimize data usage by not downloading video content

    Technical Details:

    • iOS 17+
    • Swift 6
    • Using AVFoundation for playback
    • Current implementation uses AVPlayer with AVPlayerItem

    Current Code Structure:

    


    import AVFoundation
    import Combine

    class StreamPlayer: ObservableObject {
        @Published var isPlaying = false
        private var player: AVPlayer?
        private var playerItem: AVPlayerItem?

        func playStream(url: URL) {
            let asset = AVURLAsset(url: url)
            playerItem = AVPlayerItem(asset: asset)
            player = AVPlayer(playerItem: playerItem)
            player?.play()
        }
    }

    Stream Analysis: When analyzing the video stream using FFmpeg:

    Input #0, hls, from 'https://svs.itworkscdn.net/smcwatarlive/smcwatar/chunks.m3u8':
      Stream #0:0: Video: h264, yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 25 fps
      Stream #0:1: Audio: aac, 44100 Hz, stereo, fltp
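
    As a side note, the same track information FFmpeg reports can be checked from inside the app once playback has started, which makes it easy to confirm whether a video track is actually being loaded. A minimal sketch using standard AVFoundation properties (the logTracks helper is purely illustrative):

    import AVFoundation

    /// Print the media type and enabled state of every track AVPlayer has
    /// loaded for the item. For HLS, the tracks array is populated once
    /// playback begins.
    func logTracks(of item: AVPlayerItem) {
        for track in item.tracks {
            let type = track.assetTrack?.mediaType.rawValue ?? "unknown"
            print("track:", type, "enabled:", track.isEnabled)
        }
    }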



    


    Attempted Solutions:

    Using MobileFFmpeg:

    let command = [
        "-i", streamUrl,
        "-vn",
        "-acodec", "aac",
        "-ac", "2",
        "-ar", "44100",
        "-b:a", "128k",
        "-f", "mpegts",
        "udp://127.0.0.1:12345"
    ].joined(separator: " ")

    ffmpegProcess = MobileFFmpeg.execute(command)

    Issue: While FFmpeg successfully extracts audio, playback through AVPlayer doesn't work reliably.

    Tried using HLS output:

    


    let command = [
        "-i", streamUrl,
        "-vn",
        "-acodec", "aac",
        "-ac", "2",
        "-ar", "44100",
        "-b:a", "128k",
        "-f", "hls",
        "-hls_time", "2",
        "-hls_list_size", "3",
        outputUrl.path
    ]



    


    Issue: Creates temporary files but faces synchronization issues with live streams.

    Testing URLs:

    • Audio+Video: https://svs.itworkscdn.net/smcwatarlive/smcwatar/chunks.m3u8
    • Audio Only: https://mbcfm-radio.mbc.net/mbcfm-radio.m3u8

    



    


    Requirements:

    • Real-time audio extraction from HLS stream
    • Maintain live streaming capabilities
    • Full AirPlay support (see the audio-session sketch after this list)
    • Minimal data usage (avoid downloading video content)
    • Handle network interruptions gracefully
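
    On the AirPlay point, a playback audio session is normally the prerequisite: with the "audio" background mode enabled on the target, configuring AVAudioSession for playback lets the stream keep playing in the background and be routed to AirPlay devices. A minimal sketch using standard AVFoundation calls (how it slots into this particular app is an assumption):

    import AVFoundation

    /// Configure the shared audio session for long-form playback.
    /// Combined with the "audio" background mode, this enables background
    /// playback and AirPlay routing for audio-only content.
    func configureAudioSession() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .default)
        try session.setActive(true)
    }

    /// Let the player hand playback off to an external (AirPlay) route.
    func enableExternalPlayback(on player: AVPlayer) {
        player.allowsExternalPlayback = true
    }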


    


    Questions:

    • What's the most efficient way to extract only audio from an HLS stream in real time?
    • Is there a way to tell AVPlayer to ignore video tracks completely? (a sketch of one possibility follows this list)
    • Are there better alternatives to FFmpeg for this specific use case?
    • What's the recommended approach for handling AirPlay with modified streams?
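
    Regarding the second question, one direction worth noting is capping the item's preferred peak bit rate so AVPlayer favours the lowest-bandwidth variant in the master playlist; when the playlist advertises an audio-only rendition, that is the variant that gets chosen and the video segments are never fetched. A minimal sketch (preferredPeakBitRate is a standard AVPlayerItem property; the 96 kbit/s cap and the existence of an audio-only variant in these streams are assumptions):

    import AVFoundation

    /// Bias AVPlayer toward the cheapest HLS variant. If the master playlist
    /// contains an audio-only rendition below the cap, AVPlayer selects it
    /// and never downloads the video segments.
    func makeAudioBiasedItem(url: URL) -> AVPlayerItem {
        let asset = AVURLAsset(url: url)
        let item = AVPlayerItem(asset: asset)
        item.preferredPeakBitRate = 96_000   // bits per second; assumed cap
        return item
    }

    // Usage inside the existing StreamPlayer.playStream(url:):
    // playerItem = makeAudioBiasedItem(url: url)
    // player = AVPlayer(playerItem: playerItem)
    // player?.play()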


    


    Any guidance or alternative approaches would be greatly appreciated. Thank you!

    


    What I Tried:

    1. Direct AVPlayer Implementation:

    • Used standard AVPlayer to play HLS stream
    • Expected it to allow selecting audio-only tracks
    • Result: Always downloads both video and audio, consuming unnecessary bandwidth

    2. FFmpeg Audio Extraction:

    let command = [
        "-i", "https://svs.itworkscdn.net/smcwatarlive/smcwatar/chunks.m3u8",
        "-vn",                   // Remove video
        "-acodec", "aac",        // Audio codec
        "-ac", "2",              // 2 channels
        "-ar", "44100",          // Sample rate
        "-b:a", "128k",          // Bitrate
        "-f", "mpegts",          // Output format
        "udp://127.0.0.1:12345"  // Local stream
    ].joined(separator: " ")     // MobileFFmpeg.execute expects a single command string
    ffmpegProcess = MobileFFmpeg.execute(command)



    


    Expected: Clean audio stream that AVPlayer could play
    Result: FFmpeg extracts audio but AVPlayer can't play the UDP stream

    3. HLS Segmented Approach:

    let command = [
        "-i", streamUrl,
        "-vn",
        "-acodec", "aac",
        "-f", "hls",
        "-hls_time", "2",
        "-hls_list_size", "3",
        outputUrl.path
    ]



    


    Expected: Create local HLS playlist with audio-only segments
    Result: Creates files but faces sync issues with live stream
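
    For what it's worth, the sync issues in this third attempt are often tied to the default hls muxer behaviour: for a live source the local playlist has to keep rolling rather than grow. A sketch of how that could look, reusing the variables from the snippets above (the -hls_flags values are standard FFmpeg hls muxer options; that they resolve the drift seen here is an assumption):

    // Keep only a few segments and delete old ones so the local playlist
    // behaves like a live stream that AVPlayer can keep following.
    let command = [
        "-i", streamUrl,
        "-vn",
        "-acodec", "aac",
        "-f", "hls",
        "-hls_time", "2",
        "-hls_list_size", "6",
        "-hls_flags", "delete_segments+append_list",
        outputUrl.path
    ].joined(separator: " ")

    ffmpegProcess = MobileFFmpeg.execute(command)

    // Once the first segments are on disk, point AVPlayer at the local playlist.
    let localItem = AVPlayerItem(url: outputUrl)
    player = AVPlayer(playerItem: localItem)
    player?.play()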

    



    


    Expected Behavior:

    • Stream plays audio only
    • Minimal data usage (no video download)
    • Working AirPlay support
    • Real-time playback without delays

    Actual Results:

    • Either downloads full video stream (wasteful)
    • Or fails to play extracted audio
    • AirPlay issues with modified streams
    • Sync problems with live content

  • Scalable Webinar Features Using Open-Source Tools ? [closed]

    31 January, by Firas Ben said

    I am searching for a scalable webinar solution that can handle 1000+ concurrent users. I have explored platforms like BigBlueButton but encountered limitations with scalability in real-world scenarios.

    


    My requirements include:

    • Support for RTMP and HLS streaming.
    • Chat and screen-sharing functionalities.
    • Ability to integrate with custom APIs.
    


    I’d like to know how to address these challenges using open-source tools. For instance:

    


      

    • What configurations are necessary to scale tools like BigBlueButton for large audiences?
    • Are there specific architectural patterns or server setups recommended for handling this user load?


    


    Any guidance or examples would be appreciated.

    


  • Unable to retrieve video stream from RTSP URL inside Docker container

    6 February, by birdalugur

    I have a FastAPI application running inside a Docker container that is trying to stream video from an RTSP camera URL using OpenCV. The setup works fine locally, but when running inside Docker, the /video endpoint does not return a stream and times out. Below are the details of the issue.

    


    Docker Setup:

    Dockerfile:

    FROM python:3.10.12

RUN apt-get update && apt-get install -y \
    libgl1-mesa-glx \
    libglib2.0-0

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "app.py"]



    


      

    Docker Compose:

    services:
  api:
    build: ./api
    ports:
      - "8000:8000"
    depends_on:
      - redis
      - mongo
    networks:
      - app_network
    volumes:
      - ./api:/app
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - MONGO_URI=mongodb://mongo:27017/app_db

  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - api
    networks:
      - app_network
    volumes:
      - ./frontend:/app
      - /app/node_modules

  redis:
    image: "redis:alpine"
    restart: always
    networks:
      - app_network
    volumes:
      - redis_data:/data

  mongo:
    image: "mongo:latest"
    restart: always
    networks:
      - app_network
    volumes:
      - mongo_data:/data/db

networks:
  app_network:
    driver: bridge

volumes:
  redis_data:
  mongo_data:



    


    Issue:

    When I try to access the /video endpoint, the following warnings appear:

    [ WARN:0@46.518] global cap_ffmpeg_impl.hpp:453 _opencv_ffmpeg_interrupt_callback Stream timeout triggered after 30037.268665 ms


    


    However, locally, the RTSP stream works fine using OpenCV with the same code.

    


    Additional Information:

    1. Network: The Docker container can successfully ping the camera IP (10.100.10.94).
    2. Local Video: I can read frames from a local video file without issues.
    3. RTSP Stream: I am able to access the RTSP stream directly using OpenCV locally, but not inside the Docker container.


    Code:

    Here's the relevant part of the code in my api/app.py:

    import cv2
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

# FastAPI application instance (needed by the @app.get route below)
app = FastAPI()

RTSP_URL = "rtsp://deneme:155115@10.100.10.94:554/axis-media/media.amp?adjustablelivestream=1&fps=10"

def generate_frames():
    cap = cv2.VideoCapture(RTSP_URL)
    if not cap.isOpened():
        print("Failed to connect to RTSP stream.")
        return

    while True:
        success, frame = cap.read()
        if not success:
            print("Failed to capture frame.")
            break

        _, buffer = cv2.imencode(".jpg", frame)
        frame_bytes = buffer.tobytes()

        yield (
            b"--frame\r\n" b"Content-Type: image/jpeg\r\n\r\n" + frame_bytes + b"\r\n"
        )

    cap.release()

@app.get("/video")
async def video_feed():
    """Return MJPEG stream to the browser."""
    return StreamingResponse(
        generate_frames(), media_type="multipart/x-mixed-replace; boundary=frame"
    )


    


    Has anyone faced similar issues, or does anyone have suggestions on how to resolve this?