
Other articles (62)
-
Videos
21 April 2011
Like "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 video tag.
One drawback of this tag is that it is not handled correctly by some browsers (Internet Explorer, to name one) and that each browser natively supports only certain video formats.
Its main advantage, on the other hand, is native video playback in the browser, which removes the need for Flash and (...)
-
Enabling/disabling features (plugins)
18 February 2011
To manage adding and removing extra features (plugins), MediaSPIP relies on SVP as of version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To reach it, go to the configuration area and open the "Gestion des plugins" (plugin management) page.
By default MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work smoothly with each (...)
-
Enabling visitor registration
12 April 2011
It is also possible to enable visitor registration, which lets anyone open an account on the channel themselves, for example in the context of open projects.
To do so, go to the site configuration area and choose the "Gestion des utilisateurs" (user management) sub-menu. The first form shown corresponds to this feature.
By default, MediaSPIP created during its initialisation a menu item in the top menu of the page leading (...)
On other sites (10137)
-
Render Multiple Gifs with ffplay/ffmpeg in Winform
17 June 2019, by Purqs
I'm trying to get any number of animated GIFs to render on something like a Panel or PictureBox, using the transparency built into each GIF. I've tried a couple of approaches but am not very familiar with ffmpeg and the like. Below is some code I use to render a single GIF inside a panel, but I can't figure out how to get, say, 5 GIFs to stack/layer on one another and still render as you would expect.
I need this to render inside the form rather than in a separate output window. I am also a little confused about why ffplay.exe doesn't take the -i option here, and whether that is why I can't get it to render. Any ideas?
Working example below.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Diagnostics;
using System.Threading;
using System.IO;
using System.Reflection;
using System.Runtime.InteropServices;
using System.Drawing.Text;
using System.Text.RegularExpressions;
using System.Configuration;
using Microsoft.Win32;
using System.Windows.Forms.VisualStyles;
//FOR THIS EXAMPLE, CREATE A FORM WITH A BUTTON AND A PANEL ON IT.
//button: the button's Click handler is "button1_Click"
//panel: needed as the surface to render onto
//FILES:
//Test.gif
//These ff* executables came from the official ffmpeg site.
//ffplay.exe //currently using
//ffmpeg.exe //probably needed to get it working the way I want
//Most of the code below came from https://stackoverflow.com/questions/31465630/ffplay-successfully-moved-inside-my-winform-how-to-set-it-borderless which was a good starting point.
namespace Test_Form
{
    public partial class Form1 : Form
    {
        [DllImport("user32.dll", SetLastError = true)]
        private static extern bool MoveWindow(IntPtr hWnd, int X, int Y, int nWidth, int nHeight, bool bRepaint);
        [DllImport("user32.dll")]
        private static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);
        //Process ffplay = null;
        public Form1()
        {
            InitializeComponent();
            Application.EnableVisualStyles();
            this.DoubleBuffered = true;
        }
        public Process ffplay = new Process();
        private void FFplay()
        {
            ffplay.StartInfo.FileName = "ffplay.exe";
            ffplay.StartInfo.Arguments = "-noborder Test.gif"; //THIS IS WHERE I INPUT THE GIF FILE
            ffplay.StartInfo.CreateNoWindow = true;
            ffplay.StartInfo.RedirectStandardOutput = true;
            ffplay.StartInfo.UseShellExecute = false;
            ffplay.EnableRaisingEvents = true;
            ffplay.OutputDataReceived += (o, e) => Debug.WriteLine(e.Data ?? "NULL", "ffplay");
            ffplay.ErrorDataReceived += (o, e) => Debug.WriteLine(e.Data ?? "NULL", "ffplay");
            ffplay.Exited += (o, e) => Debug.WriteLine("Exited", "ffplay");
            ffplay.Start();
            Thread.Sleep(1000); // you need to wait/check the process started, then...
            // child, new parent
            // make 'this' the parent of ffmpeg (presuming you are in scope of a Form or Control)
            SetParent(ffplay.MainWindowHandle, this.Handle);
            // window, x, y, width, height, repaint
            // move the ffplayer window to the top-left corner and set the size to 320x280
            MoveWindow(ffplay.MainWindowHandle, 800, 600, 320, 280, true);
            SetParent(ffplay.MainWindowHandle, this.panel1.Handle);
            MoveWindow(ffplay.MainWindowHandle, -5, -30, 320, 280, true);
        }
        //runs the FFplay Command
        private void button1_Click(object sender, EventArgs e)
        {
            FFplay();
        }
        private void Form1_FormClosed(object sender, FormClosedEventArgs e)
        {
            try { ffplay.Kill(); }
            catch { }
        }
    }
}

I would like the button to let me add any number of GIFs (say 5 or 10) to the same area and have them all animate, with each GIF's transparency showing whatever is underneath it.
So, for example, I could have a circle image, then a spinning/loading transparent GIF on top of it, and then a GIF that counts up/down on top of that, to give the effect of a countdown.
Thanks for all the help!
-
How to programmatically read an audio RTP stream using javacv and ffmpeg ?
21 May 2019, by Chris
I am trying to read an audio RTP stream, produced by ffmpeg on the command line, using JavaCV. I create a DatagramSocket that listens on a specified port, but I can't get the audio frames.
I have tried different kinds of buffers to play the audio through my speakers, but I get a lot of "Invalid return value 0 for stream protocol" error messages and no audio.
I am running the following command to stream an audio file:
ffmpeg -re -i /some/file.wav -ar 44100 -f mulaw -f rtp rtp://127.0.0.1:7780
And here is an excerpt of my code so far:
public class FrameGrabber implements Runnable {
    private static final TimeUnit SECONDS = TimeUnit.SECONDS;
    private InetAddress ipAddress;
    private DatagramSocket serverSocket;

    public FrameGrabber(Integer port) throws UnknownHostException, SocketException {
        super();
        this.ipAddress = InetAddress.getByName("192.168.44.18");
        serverSocket = new DatagramSocket(port, ipAddress);
    }

    public AudioFormat getAudioFormat() {
        float sampleRate = 44100.0F; // 8000,11025,16000,22050,44100
        int sampleSizeInBits = 16;   // 8,16
        int channels = 1;            // 1,2
        boolean signed = true;       // true,false
        boolean bigEndian = false;   // true,false
        return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
    }

    @Override
    public void run() {
        byte[] buffer = new byte[2048];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        DataInputStream dis = new DataInputStream(new ByteArrayInputStream(packet.getData(), packet.getOffset(), packet.getLength()));
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(dis);
        grabber.setFormat("mulaw");
        grabber.setSampleRate((int) getAudioFormat().getSampleRate());
        grabber.setAudioChannels(getAudioFormat().getChannels());
        SourceDataLine soundLine = null;
        try {
            grabber.start();
            if (grabber.getSampleRate() > 0 && grabber.getAudioChannels() > 0) {
                AudioFormat audioFormat = new AudioFormat(grabber.getSampleRate(), 16, grabber.getAudioChannels(), true, true);
                DataLine.Info info = new DataLine.Info(SourceDataLine.class, audioFormat);
                soundLine = (SourceDataLine) AudioSystem.getLine(info);
                soundLine.open(audioFormat);
                soundLine.start();
            }
            ExecutorService executor = Executors.newSingleThreadExecutor();
            while (true) {
                try {
                    serverSocket.receive(packet);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                Frame frame = grabber.grab();
                //if (frame == null) break;
                if (frame != null && frame.samples != null) {
                    ShortBuffer channelSamplesFloatBuffer = (ShortBuffer) frame.samples[0];
                    channelSamplesFloatBuffer.rewind();
                    ByteBuffer outBuffer = ByteBuffer.allocate(channelSamplesFloatBuffer.capacity() * 2);
                    float[] samples = new float[channelSamplesFloatBuffer.capacity()];
                    for (int i = 0; i < channelSamplesFloatBuffer.capacity(); i++) {
                        short val = channelSamplesFloatBuffer.get(i);
                        outBuffer.putShort(val);
                    }
                    if (soundLine == null) return;
                    try {
                        SourceDataLine finalSoundLine = soundLine;
                        executor.submit(() -> {
                            finalSoundLine.write(outBuffer.array(), 0, outBuffer.capacity());
                            outBuffer.clear();
                        }).get();
                    } catch (InterruptedException interruptedException) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
            /*
            executor.shutdownNow();
            executor.awaitTermination(1, SECONDS);
            if (soundLine != null) {
                soundLine.stop();
            }
            grabber.stop();
            grabber.release();*/
        } catch (ExecutionException ex) {
            System.out.println("ExecutionException");
            ex.printStackTrace();
        } catch (org.bytedeco.javacv.FrameGrabber.Exception ex) {
            System.out.println("FrameGrabberException");
            ex.printStackTrace();
        } catch (LineUnavailableException ex) {
            System.out.println("LineUnavailableException");
            ex.printStackTrace();
        }/* catch (InterruptedException e) {
            System.out.println("InterruptedException");
            e.printStackTrace();
        }*/
    }

    public static void main(String[] args) throws SocketException, UnknownHostException {
        Runnable apRunnable = new FrameGrabber(7780);
        Thread ap = new Thread(apRunnable);
}At this stage, I am trying to play the audio file in my speakers but I am getting the following logs :
Task :FrameGrabber.main()
Invalid return value 0 for stream protocol
Invalid return value 0 for stream protocol
Input #0, mulaw, from 'java.io.DataInputStream@474e6cea':
  Duration: N/A, bitrate: 352 kb/s
    Stream #0:0: Audio: pcm_mulaw, 44100 Hz, 1 channels, s16, 352 kb/s
Invalid return value 0 for stream protocol
Invalid return value 0 for stream protocol
Invalid return value 0 for stream protocol
Invalid return value 0 for stream protocol
...

What am I doing wrong?
Thanks in advance!
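One variant worth sketching: FFmpegFrameGrabber can open a network URL itself, so the DatagramSocket / DataInputStream plumbing above may not be needed at all; as written, the DataInputStream wraps a packet buffer that has not received anything yet, which may be what the "Invalid return value 0" messages are complaining about. The sketch below is untested and rests on assumptions: the sender is switched from RTP to plain UDP (for example ffmpeg -re -i /some/file.wav -ar 44100 -f mulaw udp://127.0.0.1:7780), since ffmpeg's RTP input usually wants an SDP description, and the class name, port and address are placeholders.

import java.nio.ByteBuffer;
import java.nio.ShortBuffer;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;

public class UdpMulawPlayer {
    public static void main(String[] args) throws Exception {
        // Assumption: let FFmpeg bind and read the UDP port itself; no DatagramSocket needed.
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("udp://127.0.0.1:7780");
        grabber.setFormat("mulaw");   // raw mu-law bytes, no container
        grabber.setSampleRate(44100); // must match the sender's -ar
        grabber.setAudioChannels(1);
        grabber.start();

        // Decoded samples come back as signed 16-bit PCM.
        AudioFormat format = new AudioFormat(grabber.getSampleRate(), 16, grabber.getAudioChannels(), true, true);
        SourceDataLine line = (SourceDataLine) AudioSystem.getLine(new DataLine.Info(SourceDataLine.class, format));
        line.open(format);
        line.start();

        Frame frame;
        while ((frame = grabber.grabSamples()) != null) {
            if (frame.samples == null) continue;
            ShortBuffer samples = (ShortBuffer) frame.samples[0];
            samples.rewind();
            // Big-endian ByteBuffer matches the big-endian AudioFormat above.
            ByteBuffer out = ByteBuffer.allocate(samples.remaining() * 2);
            while (samples.hasRemaining()) {
                out.putShort(samples.get());
            }
            line.write(out.array(), 0, out.position());
        }
        line.drain();
        grabber.stop();
        grabber.release();
    }
}

The point of the sketch is simply that the grabber owns the socket, so frames are pulled in one loop and written to the SourceDataLine as they are decoded, rather than parsing a single, possibly empty, packet buffer.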
-
Most efficient way to render bitmap to screen on Linux [on hold]
22 July 2019, by Maximus
My goal is to receive a video feed over wifi and display it on my screen. For this, I've created a couple of small programs and a bash script to automate running them. It works like this:
UDPBitmap/Plotter & ffplay -i - < UDPBitmap/pipe & python requester.py;
Translation: there is a C++ program called Plotter; its job is to receive packets on an assigned UDP port, process them and write the result to a named pipe (UDPBitmap/pipe). The pipe is read by ffplay, which renders the video on screen. The Python file is solely responsible for accessing and controlling the camera with various HTTP requests.
The above command works fine and everything behaves as expected. However, the resulting latency and framerate are a bit worse than I had hoped for. The bottleneck is not the pipe (it is fast enough), and the wifi transmission is also fast enough. The only thing left is ffplay.
Question:
What is the most efficient way to render a bitmap to the screen on Linux? Is there a de facto library for this that I can use?
Note:
- Language/framework/library does not matter (C, C++, Java, Python, native Linux tools and so on...)
- I do not need a window handle, but is SDL+OpenGL the way to go?
- Writing directly to the framebuffer would be super cool...
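Since the question explicitly allows any language, one simple baseline for "blit a bitmap into a window" (illustrative only; it is unlikely to beat SDL2 or a direct DRM/framebuffer path for raw speed) is Java's BufferStrategy, which gives a double-buffered, page-flipping surface. In the sketch below, fillFrame() is a hypothetical stand-in for the receive-and-decode work that Plotter does, and the window size is an arbitrary placeholder.

import java.awt.Canvas;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.image.BufferStrategy;
import java.awt.image.BufferedImage;
import javax.swing.JFrame;

public class BitmapViewer {
    // Hypothetical stand-in for "receive a UDP packet, decode it, write the pixels".
    static void fillFrame(BufferedImage img) {
        int shade = (int) (System.currentTimeMillis() / 10 % 256);
        for (int y = 0; y < img.getHeight(); y++)
            for (int x = 0; x < img.getWidth(); x++)
                img.setRGB(x, y, (shade << 16) | (shade << 8) | shade);
    }

    public static void main(String[] args) throws Exception {
        int width = 640, height = 480;
        BufferedImage bitmap = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);

        Canvas canvas = new Canvas();
        canvas.setPreferredSize(new Dimension(width, height));
        JFrame window = new JFrame("Bitmap viewer");
        window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        window.add(canvas);
        window.pack();
        window.setVisible(true);

        canvas.createBufferStrategy(2); // double buffering
        BufferStrategy strategy = canvas.getBufferStrategy();

        while (true) {
            fillFrame(bitmap);                    // would be: copy the latest decoded frame
            Graphics g = strategy.getDrawGraphics();
            g.drawImage(bitmap, 0, 0, null);      // blit the bitmap
            g.dispose();
            strategy.show();                      // flip buffers
            Thread.sleep(33);                     // roughly 30 fps pacing, just for the sketch
        }
    }
}

In a real renderer you would write the received pixels straight into the image's backing array (via ((DataBufferInt) bitmap.getRaster().getDataBuffer()).getData()) instead of calling setRGB per pixel, and size the canvas to the actual video dimensions.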