Advanced search

Media (1)

Keyword: - Tags -/portrait

Other articles (76)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Personnaliser les catégories (Customizing categories)

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a rubrique (section).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
    It is also in this configuration section that you can specify the (...)

On other sites (12570)

  • Can't correctly decode an image frame using PyAV

    17 April 2023, by Martin Blore

    I'm trying to simply encode and decode a captured frame from the webcam. I want to be able to send this over TCP, but at the moment I'm having trouble getting it to work even locally.

    


    Here's my code that simply takes the frame from the webcam, encodes it, then decodes it, and displays the two images in a new window. The two images look like this:

    


    https://i.imgur.com/dGSlmrH.png

    


    Here's the code:

    


    import struct
import cv2
import socket
import av
import time
import os

class PerfTimer:
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self.start_time = time.perf_counter()

    def __exit__(self, type, value, traceback):
        end_time = time.perf_counter()
        print(f"'{self.name}' taken:", end_time - self.start_time, "seconds.")

os.environ['AV_PYTHON_AVISYNTH'] = 'C:/ffmpeg/bin'

socket_enabled = False
sock = None
if socket_enabled:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("Connecting to server...")
    sock.connect(('127.0.0.1', 8000))

# Set up video capture.
print("Opening web cam...")
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)

# Initialize the encoder.
encoder = av.CodecContext.create('h264', 'w')
encoder.width = 800
encoder.height = 600
encoder.pix_fmt = 'yuv420p'
encoder.bit_rate = 5000

# Initialize the decoder.
decoder = av.CodecContext.create('h264', 'r')
decoder.width = 800
decoder.height = 600
decoder.pix_fmt = 'yuv420p'
decoder.bit_rate = 5000

print("Streaming...")
while(cap.isOpened()):
    
    # Capture the frame from the camera.
    ret, orig_frame = cap.read()

    cv2.imshow('Source Video', orig_frame)

    # Convert to YUV.
    img_yuv = cv2.cvtColor(orig_frame, cv2.COLOR_BGR2YUV_I420)

    # Create a PyAV video frame object from the numpy array.
    video_frame = av.VideoFrame.from_ndarray(img_yuv, format='yuv420p')

    with PerfTimer("Encoding") as p:
        encoded_frames = encoder.encode(video_frame)

    # Sometimes encode() returns no packets yet (the encoder buffers frames), so let's skip this frame.
    if len(encoded_frames) == 0:
        continue

    print(f"Decoding {len(encoded_frames)} frames...")

    for frame in encoded_frames:
        encoded_frame_bytes = bytes(frame)

        if socket_enabled:
            # Get the size of the encoded packet in bytes.
            # (The code in the original post is cut off at this point.)
            size = struct.pack('
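
    A likely culprit in the code above is the bit rate: PyAV's bit_rate is expressed in bits per second, so 5000 is roughly 5 kbit/s, far too low for 800x600 video, and will give an unrecognizable picture even when encoding and decoding otherwise succeed. Below is a minimal, hedged sketch of the same round trip (webcam frame -> H.264 packets -> decoded frame); the codec name 'libx264', the 2 Mbit/s bit rate and the 'zerolatency' tune are illustrative assumptions, not values from the original post:

import av
import cv2

WIDTH, HEIGHT = 800, 600

# Encoder: same attributes as above, but with a realistic bit rate and a
# low-latency tune so packets come out without long buffering delays.
encoder = av.CodecContext.create('libx264', 'w')
encoder.width = WIDTH
encoder.height = HEIGHT
encoder.pix_fmt = 'yuv420p'
encoder.bit_rate = 2_000_000
encoder.options = {'tune': 'zerolatency'}

# Decoder: width/height/pix_fmt are read from the bitstream, so nothing to set.
decoder = av.CodecContext.create('h264', 'r')

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, WIDTH)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, HEIGHT)

while cap.isOpened():
    ret, bgr = cap.read()
    if not ret:
        break
    cv2.imshow('Source Video', bgr)

    # OpenCV delivers BGR; convert to an I420 buffer and wrap it as a yuv420p frame.
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV_I420)
    frame = av.VideoFrame.from_ndarray(yuv, format='yuv420p')

    # encode() returns zero or more av.Packet objects (the encoder may still buffer).
    for packet in encoder.encode(frame):
        data = bytes(packet)  # these bytes are what would be sent over TCP

        # Receiving side: rebuild a packet from the raw bytes and decode it.
        for decoded in decoder.decode(av.Packet(data)):
            cv2.imshow('Decoded Video', decoded.to_ndarray(format='bgr24'))

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

    Any packets still buffered inside the encoder at the end of the stream can be retrieved with a final encoder.encode(None) call.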

    


  • I need to create a streaming app where videos are stored in AWS S3

    27 July 2023, by abuzar zaidi

    I need to create a video streaming app where my videos are stored in S3 and streamed from the backend. Right now I am trying to use ffmpeg on the backend, but it does not work properly.

    


    What am I doing wrong in this code? And if ffmpeg does not support streaming video stored in AWS S3, please suggest other options. (A sketch of one possible approach follows at the end of this entry.)

    


    Backend

    


    const express = require('express');
const aws = require('aws-sdk');
const ffmpeg = require('fluent-ffmpeg');
const cors = require('cors');
const app = express();

// Set up AWS credentials
aws.config.update({
  accessKeyId: '#######################',
  secretAccessKey: '###############',
  region: '###############3',
});

const s3 = new aws.S3();
app.use(cors());

app.get('/stream', (req, res) => {
  const bucketName = '#######';
  const key = '##############'; // Replace with the key/path of the video file in the S3 bucket
  const params = { Bucket: bucketName, Key: key };
  const videoStream = s3.getObject(params).createReadStream();

  // Transcode to HLS format
  const hlsStream = ffmpeg(videoStream)
    .format('hls')
    .outputOptions([
      '-hls_time 10',
      '-hls_list_size 0',
      '-hls_segment_filename segments/segment%d.ts',
    ])
    .pipe(res);

  // Transcode to DASH format and pipe the output to the response
  ffmpeg(videoStream)
    .format('dash')
    .outputOptions([
      '-init_seg_name init-stream$RepresentationID$.mp4',
      '-media_seg_name chunk-stream$RepresentationID$-$Number%05d$.mp4',
    ])
    .output(res)
    .run();
});

const port = 5000;
app.listen(port, () => {
  console.log(`Server running on http://localhost:${port}`);
});


    


    Frontend

    


    import React from 'react';

const App = () => {
  const videoUrl = 'http://localhost:3000/api/playlist/runPlaylist/6c3e7af45a3b8a5caf2fef17a42ef9a0'; // Replace with your backend URL
  // Please list down solutions or options I can use here
  const videoUrl = 'http://localhost:5000/stream';
  return (
    <div>
      <h1>Video Streaming Example</h1>
      <video controls="controls">
        <source src="{videoUrl}" type="video/mp4"></source>
        Your browser does not support the video tag.
      </video>
    </div>
  );
};

export default App;

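
    Regarding the question above: with .pipe(res), fluent-ffmpeg writes the HLS playlist to the HTTP response while the -hls_segment_filename option writes the segments to local files, and piping a second DASH transcode into the same res on top of that is unlikely to produce anything playable. One common alternative is to have ffmpeg read the source straight from S3 via a pre-signed URL and write the playlist plus segments into a directory that is then served as static files. Here is a minimal sketch of that idea in Python (boto3 plus the ffmpeg command-line tool) rather than Node; the bucket name, object key and output paths are placeholders:

import os
import subprocess
import boto3

# Placeholder bucket/key; AWS credentials come from the usual environment/config.
BUCKET = 'my-bucket'
KEY = 'videos/input.mp4'

# 1) Pre-signed URL so ffmpeg can read the object over HTTPS without AWS
#    credentials appearing on the command line.
s3 = boto3.client('s3')
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': BUCKET, 'Key': KEY},
    ExpiresIn=3600,
)

# 2) Transcode to HLS: ffmpeg pulls the video from the URL and writes the playlist
#    and .ts segments into ./hls/, which a web server can expose as static files.
os.makedirs('hls', exist_ok=True)
subprocess.run(
    [
        'ffmpeg',
        '-i', url,
        '-c:v', 'libx264',
        '-c:a', 'aac',
        '-f', 'hls',
        '-hls_time', '10',
        '-hls_list_size', '0',
        '-hls_segment_filename', 'hls/segment%d.ts',
        'hls/playlist.m3u8',
    ],
    check=True,
)

    The resulting hls/playlist.m3u8 can then be served as a static file. On the frontend, note that most browsers cannot play an HLS playlist natively in a plain <video> tag (Safari is the main exception), so it is usually loaded through a small player library such as hls.js, or the video is exposed as a progressive MP4 instead.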

  • Can't correctly decode an image frame using OpenCV

    15 April 2023, by Martin Blore

    I'm trying to simply encode and decode a captured frame from the webcam. I want to be able to send this over TCP, but at the moment I'm having trouble getting it to work even locally.


    Here's my code that simply takes the frame from the webcam, encodes it, then decodes it, and displays the two images in a new window. The two images look like this:


    https://i.imgur.com/dGSlmrH.png


    Here's the code:


import struct
import cv2
import socket
import av
import time
import os

class PerfTimer:
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self.start_time = time.perf_counter()

    def __exit__(self, type, value, traceback):
        end_time = time.perf_counter()
        print(f"'{self.name}' taken:", end_time - self.start_time, "seconds.")

os.environ['AV_PYTHON_AVISYNTH'] = 'C:/ffmpeg/bin'

socket_enabled = False
sock = None
if socket_enabled:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("Connecting to server...")
    sock.connect(('127.0.0.1', 8000))

# Set up video capture.
print("Opening web cam...")
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)

# Initialize the encoder.
encoder = av.CodecContext.create('h264', 'w')
encoder.width = 800
encoder.height = 600
encoder.pix_fmt = 'yuv420p'
encoder.bit_rate = 5000

# Initialize the decoder.
decoder = av.CodecContext.create('h264', 'r')
decoder.width = 800
decoder.height = 600
decoder.pix_fmt = 'yuv420p'
decoder.bit_rate = 5000

print("Streaming...")
while(cap.isOpened()):

    # Capture the frame from the camera.
    ret, orig_frame = cap.read()

    cv2.imshow('Source Video', orig_frame)

    # Convert to YUV.
    img_yuv = cv2.cvtColor(orig_frame, cv2.COLOR_BGR2YUV_I420)

    # Create a PyAV video frame object from the numpy array.
    video_frame = av.VideoFrame.from_ndarray(img_yuv, format='yuv420p')

    with PerfTimer("Encoding") as p:
        encoded_frames = encoder.encode(video_frame)

    # Sometimes encode() returns no packets yet (the encoder buffers frames), so let's skip this frame.
    if len(encoded_frames) == 0:
        continue

    print(f"Decoding {len(encoded_frames)} frames...")

    for frame in encoded_frames:
        encoded_frame_bytes = bytes(frame)

        if socket_enabled:
            # Get the size of the encoded packet in bytes.
            # (The code in the original post is cut off at this point.)
            size = struct.pack('
