
Media (91)
-
Spoon - Revenge!
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
My Morning Jacket - One Big Holiday
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Zap Mama - Wadidyusay?
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
David Byrne - My Fair Lady
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Beastie Boys - Now Get Busy
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Granite de l’Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
Other articles (84)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, in standalone form.
For a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)
-
MediaSPIP Player: potential problems
22 February 2011
The player does not work on Internet Explorer
On Internet Explorer (8 and 7 at least), the plugin uses the Flowplayer Flash player to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
If the configuration of this Apache module contains a line that looks like the following, try removing it or commenting it out to see whether the player then works correctly: (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (4816)
-
Can't correctly decode an image frame using PyAV
17 April 2023, by Martin Blore
I'm trying to simply encode and decode a captured frame from the webcam. I want to be able to send this over TCP, but at the moment I'm having trouble doing it just locally.


Here's my code; it simply takes the frame from the webcam, encodes it, then decodes it, and displays the two images in a new window. The two images look like this:




Here's the code:


import struct
import cv2
import socket
import av
import time
import os

class PerfTimer:
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self.start_time = time.perf_counter()

    def __exit__(self, type, value, traceback):
        end_time = time.perf_counter()
        print(f"'{self.name}' taken:", end_time - self.start_time, "seconds.")

os.environ['AV_PYTHON_AVISYNTH'] = 'C:/ffmpeg/bin'

socket_enabled = False
sock = None
if socket_enabled:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("Connecting to server...")
    sock.connect(('127.0.0.1', 8000))

# Set up video capture.
print("Opening web cam...")
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)

# Initialize the encoder.
encoder = av.CodecContext.create('h264', 'w')
encoder.width = 800
encoder.height = 600
encoder.pix_fmt = 'yuv420p'
encoder.bit_rate = 5000

# Initialize the decoder.
decoder = av.CodecContext.create('h264', 'r')
decoder.width = 800
decoder.height = 600
decoder.pix_fmt = 'yuv420p'
decoder.bit_rate = 5000

print("Streaming...")
while cap.isOpened():

    # Capture the frame from the camera.
    ret, orig_frame = cap.read()

    cv2.imshow('Source Video', orig_frame)

    # Convert to YUV.
    img_yuv = cv2.cvtColor(orig_frame, cv2.COLOR_BGR2YUV_I420)

    # Create a video frame object from the numpy array.
    video_frame = av.VideoFrame.from_ndarray(img_yuv, format='yuv420p')

    with PerfTimer("Encoding") as p:
        encoded_frames = encoder.encode(video_frame)

    # Sometimes the encode results in no frames encoded, so let's skip the frame.
    if len(encoded_frames) == 0:
        continue

    print(f"Decoding {len(encoded_frames)} frames...")

    for frame in encoded_frames:
        encoded_frame_bytes = bytes(frame)

        if socket_enabled:
            # Get the size of the encoded frame in bytes
            # ('<I' little-endian length-prefix format assumed here).
            size = struct.pack('<I', len(encoded_frame_bytes))
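
The excerpt stops at the point where the encoded bytes would be sent and decoded. As a rough sketch of the missing decode-and-display half, continuing inside the same loop over encoded_frames and assuming the decoder accepts packets exactly as they come out of encoder.encode() (the av.Packet wrapping, the 'Decoded Video' window name and the BGR conversion are assumptions, not taken from the post):

# Sketch of the decode half, inside the loop over encoded_frames.
packet = av.Packet(encoded_frame_bytes)      # wrap the raw H.264 payload
with PerfTimer("Decoding") as p:
    decoded_frames = decoder.decode(packet)

for decoded in decoded_frames:
    # Convert back to a BGR numpy array so OpenCV can display it.
    bgr = decoded.to_ndarray(format='bgr24')
    cv2.imshow('Decoded Video', bgr)

# OpenCV only repaints its windows when the event loop is pumped.
cv2.waitKey(1)

Note also that encoder.bit_rate = 5000 means 5 kbit/s, which is extremely low for 800x600 video and will by itself produce very blocky frames.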


-
I need to create a streaming app where videos are stored in AWS S3
27 July 2023, by abuzar zaidi
I need to create a video streaming app where my videos will be stored in S3 and streamed from the backend. Right now I am trying to use ffmpeg in the backend, but it does not work properly.


What am I doing wrong in this code? And if ffmpeg does not support streaming video stored in AWS S3, please suggest other options.


Backend


const express = require('express');
const aws = require('aws-sdk');
const ffmpeg = require('fluent-ffmpeg');
const cors = require('cors'); 
const app = express();

// Set up AWS credentials
aws.config.update({
  accessKeyId: '#######################',
  secretAccessKey: '###############',
  region: '###############3',
});

const s3 = new aws.S3();
app.use(cors());

app.get('/stream', (req, res) => {
  const bucketName = '#######';
  const key = '##############'; // Replace with the key/path of the video file in the S3 bucket
  const params = { Bucket: bucketName, Key: key };
  const videoStream = s3.getObject(params).createReadStream();

  // Transcode to HLS format
  const hlsStream = ffmpeg(videoStream)
    .format('hls')
    .outputOptions([
      '-hls_time 10',
      '-hls_list_size 0',
      '-hls_segment_filename segments/segment%d.ts',
    ])
    .pipe(res);

  // Transcode to DASH format and pipe the output to the response
  ffmpeg(videoStream)
    .format('dash')
    .outputOptions([
      '-init_seg_name init-stream$RepresentationID$.mp4',
      '-media_seg_name chunk-stream$RepresentationID$-$Number%05d$.mp4',
    ])
    .output(res)
    .run();
});

const port = 5000;
app.listen(port, () => {
  console.log(`Server running on http://localhost:${port}`);
});
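
For comparison, the same "let ffmpeg read the file that sits in S3" idea can be sketched in Python without piping an SDK stream at all: hand ffmpeg a presigned HTTPS URL for the object and let it write an HLS playlist to disk that the backend can then serve statically. The bucket name, object key and output paths below are placeholders, and the snippet assumes boto3 plus an ffmpeg binary on the PATH; it is an illustration, not a drop-in replacement for the Express route above.

import os
import subprocess
import boto3

# Presign the S3 object so ffmpeg can read it over plain HTTPS.
# Bucket and key are placeholders, like the redacted values in the question.
s3 = boto3.client('s3')
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'videos/input.mp4'},
    ExpiresIn=3600,
)

# Transcode once into an HLS playlist plus segments on disk; a static route
# can then serve playlist.m3u8 and the segments to the player.
os.makedirs('segments', exist_ok=True)
subprocess.run([
    'ffmpeg', '-i', url,
    '-f', 'hls',
    '-hls_time', '10',
    '-hls_list_size', '0',
    '-hls_segment_filename', 'segments/segment%d.ts',
    'playlist.m3u8',
], check=True)

One thing that stands out in the Express handler above, independent of where the video lives, is that it pipes both an HLS transcode and a DASH transcode into the same res, so the browser receives an interleaved mix that is neither a valid MP4 nor a single coherent stream.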



Frontend


import React from 'react';

const App = () => {
  // const videoUrl = 'http://localhost:3000/api/playlist/runPlaylist/6c3e7af45a3b8a5caf2fef17a42ef9a0'; // Replace with your backend URL
  const videoUrl = 'http://localhost:5000/stream';
  return (
    <div>
      <h1>Video Streaming Example</h1>
      <video controls>
        <source src={videoUrl} type="video/mp4" />
        Your browser does not support the video tag.
      </video>
    </div>
  );
};

export default App;

Please list the solutions or options I can use here.



-
Can't correctly decode an image frame using OpenCV
15 April 2023, by Martin Blore
I'm trying to simply encode and decode a captured frame from the webcam. I want to be able to send this over TCP, but at the moment I'm having trouble doing it just locally.


Here's my code; it simply takes the frame from the webcam, encodes it, then decodes it, and displays the two images in a new window. The two images look like this:


https://i.imgur.com/dGSlmrH.png


Here's the code:


import struct
import cv2
import socket
import av
import time
import os

class PerfTimer:
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self.start_time = time.perf_counter()

    def __exit__(self, type, value, traceback):
        end_time = time.perf_counter()
        print(f"'{self.name}' taken:", end_time - self.start_time, "seconds.")

os.environ['AV_PYTHON_AVISYNTH'] = 'C:/ffmpeg/bin'

socket_enabled = False
sock = None
if socket_enabled:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("Connecting to server...")
    sock.connect(('127.0.0.1', 8000))

# Set up video capture.
print("Opening web cam...")
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)

# Initialize the encoder.
encoder = av.CodecContext.create('h264', 'w')
encoder.width = 800
encoder.height = 600
encoder.pix_fmt = 'yuv420p'
encoder.bit_rate = 5000

# Initialize the decoder.
decoder = av.CodecContext.create('h264', 'r')
decoder.width = 800
decoder.height = 600
decoder.pix_fmt = 'yuv420p'
decoder.bit_rate = 5000

print("Streaming...")
while cap.isOpened():

    # Capture the frame from the camera.
    ret, orig_frame = cap.read()

    cv2.imshow('Source Video', orig_frame)

    # Convert to YUV.
    img_yuv = cv2.cvtColor(orig_frame, cv2.COLOR_BGR2YUV_I420)

    # Create a video frame object from the numpy array.
    video_frame = av.VideoFrame.from_ndarray(img_yuv, format='yuv420p')

    with PerfTimer("Encoding") as p:
        encoded_frames = encoder.encode(video_frame)

    # Sometimes the encode results in no frames encoded, so let's skip the frame.
    if len(encoded_frames) == 0:
        continue

    print(f"Decoding {len(encoded_frames)} frames...")

    for frame in encoded_frames:
        encoded_frame_bytes = bytes(frame)

        if socket_enabled:
            # Get the size of the encoded frame in bytes
            # ('<I' little-endian length-prefix format assumed here).
            size = struct.pack('<I', len(encoded_frame_bytes))