
Other articles (55)
-
Adding user-specific information and other author-related behaviour changes
12 April 2011
The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviours (see its documentation for more information).
It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
The plugin: Gestion de la mutualisation
2 March 2010
The Gestion de mutualisation plugin makes it possible to manage the various mediaspip channels from a master site. Its purpose is to provide a pure SPIP solution to replace this older one.
Basic installation
Install the SPIP files on the server.
Then add the "mutualisation" plugin at the root of the site, as described here.
Customise the central mes_options.php file as you wish. As an example, here is the one from the mediaspip.net platform:
< ?php (...)
On other sites (3307)
-
Encoder (codec png) not found for output stream #0:0 [duplicate]
7 June 2016, by Anubhav Dhawan
This question already has an answer here:
- ffmpeg & png watermark issue (2 answers)
I’m trying to create a NodeJS app that converts a video into a GIF image.
I’m using node-gify plugin for this purpose, which uses FFmpeg and GraphicsMagick.
Here's my sample code:
var gify = require('./');
var http = require('http');
var fs = require('fs');
var opts = {
  height: 300,
  rate: 10
};
console.time('convert');
gify('out.mp4', 'out.gif', opts, function(err) {
  if (err) throw err;
  console.timeEnd('convert');
  var s = fs.statSync('out.gif');
  console.log('size: %smb', s.size / 1024 / 1024 | 0);
});

And here's my console error:
> gify@0.2.0 start /home/daffodil/repos/node-gify-master
> node example.js
/home/daffodil/repos/node-gify-master/example.js:24
if (err) throw err;
^
Error: Command failed: /bin/sh -c ffmpeg -i out.mp4 -filter:v scale=-1:300 -r 10 /tmp/IP5OXJZELd/%04d.png
ffmpeg version 3.0.2 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04)
configuration: --disable-yasm
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'out.mp4':
Metadata:
major_brand : mp42
minor_version : 1
compatible_brands: mp42mp41
creation_time : 2005-02-25 02:35:57
Duration: 00:01:10.00, start: 0.000000, bitrate: 106 kb/s
Stream #0:0(eng): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, stereo, fltp, 19 kb/s (default)
Metadata:
creation_time : 2005-02-25 02:35:57
handler_name : Apple Sound Media Handler
Stream #0:1(eng): Video: mpeg4 (Advanced Simple Profile) (mp4v / 0x7634706D), yuv420p, 192x242 [SAR 1:1 DAR 96:121], 76 kb/s, 15 fps, 15 tbr, 600 tbn, 1k tbc (default)
Metadata:
creation_time : 2005-02-25 02:35:57
handler_name : Apple Video Media Handler
Stream #0:2(eng): Data: none (rtp / 0x20707472), 4 kb/s (default)
Metadata:
creation_time : 2005-02-25 02:35:57
handler_name : hint media handler
Stream #0:3(eng): Data: none (rtp / 0x20707472), 3 kb/s (default)
Metadata:
creation_time : 2005-02-25 02:35:57
handler_name : hint media handler
Output #0, image2, to '/tmp/IP5OXJZELd/%04d.png':
Metadata:
major_brand : mp42
minor_version : 1
compatible_brands: mp42mp41
Stream #0:0(eng): Video: png, none, q=2-31, 128 kb/s (default)
Metadata:
creation_time : 2005-02-25 02:35:57
handler_name : Apple Video Media Handler
Stream mapping:
Stream #0:1 -> #0:0 (mpeg4 (native) -> ? (?))
Encoder (codec png) not found for output stream #0:0
at ChildProcess.exithandler (child_process.js:213:12)
at emitTwo (events.js:100:13)
at ChildProcess.emit (events.js:185:7)
at maybeClose (internal/child_process.js:827:16)
at Socket.<anonymous> (internal/child_process.js:319:11)
at emitOne (events.js:90:13)
at Socket.emit (events.js:182:7)
at Pipe._onclose (net.js:471:12)
PS: I had a couple of problems installing FFmpeg on my Ubuntu 14.04.
- First, FFmpeg is removed from Ubuntu 14.04 (legal issues AFAIK), but I managed to apt-get it through this.
- Second, when I tried to ./configure (as mentioned in its README.md), I got the error "yasm/nasm not found or too old. Use --disable-yasm for a crippled build.", so I used ./configure --disable-yasm instead, and it (somehow) worked.
Update #1
After reading this log a couple of times, I managed to produce a sample GIF from my mp4 file by changing the command that example.js tries to run:
From
ffmpeg -i out.mp4 -filter:v scale=-1:300 -r 10 /tmp/Lz43nx6wv1/%04d.png
To
ffmpeg -i out.mp4 -filter:v scale=-1:300 -r 10 out.gif
But that is still the command line; I need to do this from code.
So I dived into the code and found that this wrong path comes from the plugin's index.js:
...
// tmpfile(s)
var id = uid(10);
var dir = path.resolve('/tmp/' + id);
var tmp = path.join(dir, '/%04d.png');
...
Is this an issue with the plugin, or am I doing something wrong here? In any case, please point me to the correct fix, because I don't want to touch this part unless I know what I'm doing.
Update #2
Now I installed zlib1g-dev and then reinstalled both FFmpeg and GraphicsMagick (FFmpeg's PNG encoder depends on zlib, which would explain the original missing-encoder error), and now I see this error:
gm convert: No decode delegate for this image format (/tmp/ZQbEAynAcf/0702.png).
Thanks in advance :)
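For reference, the one-step command from Update #1 can be launched from code instead of a shell. Below is a minimal sketch (in Python; in the original Node.js setup, child_process.execFile would play the same role), assuming ffmpeg is on the PATH and out.mp4 exists:
import subprocess

def mp4_to_gif(src="out.mp4", dest="out.gif", height=300, rate=10):
    # One-step conversion: skips the intermediate PNG frames entirely,
    # so the ffmpeg build does not need the png encoder at all.
    cmd = [
        "ffmpeg", "-y",                      # overwrite dest if present
        "-i", src,
        "-filter:v", f"scale=-1:{height}",   # keep aspect ratio, fix height
        "-r", str(rate),                     # output frame rate
        dest,
    ]
    subprocess.run(cmd, check=True)          # raises on a nonzero exit code

mp4_to_gif()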
-
ffmpeg library m4a moov atom not found when using custom IOContext
26 April 2017, by trigger_death
I'm currently trying to implement FFmpeg into SFML so I have a wider range of audio files to read from, but I get the error
[mov,mp4,m4a,3gp,3g2,mj2 @ #] moov atom not found
when opening an m4a file. This only happens when I use a custom IOContext to read the file instead of opening it from a URL. This page says I'm not supposed to use streams to open m4a files, but is an IOContext considered a stream? I have no way to open it as a URL, because streams are how SFML works.
// Explanation of the InputStream class
class InputStream {
  int64_t getSize();
  int64_t read(void* data, int64_t size);
  int64_t seek(int64_t position);
  int64_t tell(); // gets the stream position
};
// Used for IOContext
int read(void* opaque, uint8_t* buf, int buf_size) {
  sf::InputStream* stream = (sf::InputStream*)opaque;
  return (int)stream->read(buf, buf_size);
}

// Used for IOContext
int64_t seek(void* opaque, int64_t offset, int whence) {
  sf::InputStream* stream = (sf::InputStream*)opaque;
  switch (whence) {
  case SEEK_SET:
    break;
  case SEEK_CUR:
    offset += stream->tell();
    break;
  case SEEK_END:
    offset = stream->getSize() - offset;
  }
  return (int64_t)stream->seek(offset);
}
bool open(sf::InputStream& stream) {
  AVFormatContext* m_formatContext = NULL;
  AVIOContext* m_ioContext = NULL;
  uint8_t* m_ioContextBuffer = NULL;
  size_t m_ioContextBufferSize = 0;

  av_register_all();
  avformat_network_init();

  m_formatContext = avformat_alloc_context();
  m_ioContextBuffer = (uint8_t*)av_malloc(m_ioContextBufferSize);
  if (!m_ioContextBuffer) {
    close();
    return false;
  }
  m_ioContext = avio_alloc_context(
    m_ioContextBuffer, m_ioContextBufferSize,
    0, &stream, &::read, NULL, &::seek
  );
  if (!m_ioContext) {
    close();
    return false;
  }
  m_formatContext = avformat_alloc_context();
  m_formatContext->pb = m_ioContext;
  if (avformat_open_input(&m_formatContext, NULL, NULL, NULL) != 0) {
    // FAILS HERE
    close();
    return false;
  }
  //...
  return true;
}
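As a side note (not part of the original question), the same custom-IO path can be sanity-checked from Python with PyAV, which wraps any seekable file-like object in an AVIOContext; if this opens an m4a, the container format itself is fine with custom IO as long as the seek callback behaves. test.m4a is a hypothetical local file:
import av  # PyAV: pip install av

with open("test.m4a", "rb") as fh:   # hypothetical local file
    container = av.open(fh)          # PyAV builds the AVIOContext from fh
    print(container.format.name)     # demuxer name, e.g. mov,mp4,m4a,...
    for stream in container.streams.audio:
        print(stream.codec_context.name)  # codec of each audio stream
-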
moov atom not found (Extracting unique faces from youtube video)
10 April 2023, by Tochukwu
I got the error below:
Saved 0 unique faces
[mov,mp4,m4a,3gp,3g2,mj2 @ 0000024f505224c0] moov atom not found
I'm trying to extract unique faces from a YouTube video with the code below, which is designed to download the video and extract unique faces into a folder named faces. I got an empty video and an empty folder. Please check the Python code below.
import os
import urllib.request
import cv2
import face_recognition
import numpy as np

# Step 1: Download the YouTube video
video_url = "https://www.youtube.com/watch?v=JriaiYZZhbY&t=4s"
urllib.request.urlretrieve(video_url, "video.mp4")

# Step 2: Extract frames from the video
cap = cv2.VideoCapture("video.mp4")
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
frames = []
for i in range(frame_count):
    cap.set(cv2.CAP_PROP_POS_FRAMES, i)
    ret, frame = cap.read()
    if ret:
        frames.append(frame)
cap.release()

# Step 3: Detect faces in the frames
detected_faces = []
for i, frame in enumerate(frames):
    face_locations = face_recognition.face_locations(frame)
    for j, location in enumerate(face_locations):
        top, right, bottom, left = location
        face_image = frame[top:bottom, left:right]
        cv2.imwrite(f"detected_{i}_{j}.jpg", face_image)
        detected_faces.append(face_image)

# Step 4: Save the faces as separate images
if not os.path.exists("faces"):
    os.makedirs("faces")
known_faces = []
for i in range(len(detected_faces)):
    face_image = detected_faces[i]
    face_encoding = face_recognition.face_encodings(face_image)[0]
    known_faces.append(face_encoding)
    cv2.imwrite(f"faces/face_{i}.jpg", face_image)
print("Saved", len(known_faces), "unique faces")