
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (56)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip
core autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash fallback is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and audio both to conventional computers (...)
On other sites (8880)
-
JavaCV : Mat data becomes null after using methods
5 April 2017, by rarrouba
I’m working on an Android application for object detection and counting. For the image processing I am using JavaCV (a Java wrapper for OpenCV and FFmpeg). After importing the library, I’m able to successfully use FFmpegFrameGrabber to get the frames of a video.
My problem: after I convert a Frame to a Mat object and perform some operation on that Mat object, the data becomes null.
Code :
MainActivity
public class MainActivity extends AppCompatActivity {
OpenCVFrameConverter.ToMat converterToMat = new OpenCVFrameConverter.ToMat();
private CountModule countModule;
FFmpegFrameGrabber retriever;
ArrayList<Mat> frames;
boolean frameloaded = false;
File folder = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_MOVIES);
File video = new File(folder, "test.mp4");
AndroidFrameConverter converterToBitmap = new AndroidFrameConverter();
private static WebStreamer webStreamer;
static {
System.loadLibrary("native-lib");
}
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
final Button button = (Button) findViewById(R.id.btnLdFrames);
final ImageView img = (ImageView) findViewById(R.id.imageView);
button.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
button.setText("Loading");
new Thread(new Runnable() {
public void run() {
try {
button.setClickable(false);
button.setAlpha((float) 0.3);
LoadFrames();
button.setAlpha((float) 1);
} catch (FrameGrabber.Exception e) {
e.printStackTrace();
}
}
}).start();
}
});
}
private void LoadFrames() throws FrameGrabber.Exception {
if (!frameloaded){
frameloaded = true;
retriever = new FFmpegFrameGrabber(video);
frames = new ArrayList<>();
Log.d("Frame",": Start of loop");
retriever.start();
final ImageView img = (ImageView) findViewById(R.id.imageView);
for (int i=0;i<50;i++){//155430
retriever.setFrameNumber(i*100);
Frame temp = retriever.grab();
frames.add(converterToMat.convert(temp));
Log.d("Frame",": " + i*100);
}
retriever.stop();
countModule = new CountModule(frames);
Log.d("Frame","CountModule instantiated");
}
}
}
Constructor of CountModule:
public CountModule(ArrayList<Mat> frames){
fgGBG = new Mat(frames.get(0).rows(),frames.get(0).cols(),frames.get(0).type());
gbg = createBackgroundSubtractorMOG2();
Mat maTemp = new Mat(frames.get(0).rows(),frames.get(0).cols(),frames.get(0).type());
median = new Mat(frames.get(0).rows(),frames.get(0).cols(),frames.get(0).type());
frames.get(0).copyTo(median);
median = getMedian(frames);
kernel2 = Mat.ones(11,11,CV_8U).asMat();
kernel = Mat.ones(3,1,CV_8U).asMat();
gbg.apply(median,fgGBG,0.001);
}
Variables (images):
After converting from Frame to Mat, the data has values.
As you can see, every time I use an OpenCV-specific method, the returned Mat is not what I expect.
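A hedged note on a likely cause (I can’t verify without running the Android code): JavaCV’s FrameGrabber is known to reuse a single internal buffer for the Frame returned by grab(), so Mats converted from successive frames can all alias the same memory, which is then overwritten or invalidated as grabbing continues; cloning each Mat (mat.clone()) before storing it avoids this. The pitfall in miniature, as a Python sketch with hypothetical stand-ins (buf and grab are not JavaCV APIs):

```python
# A frame "grabber" that recycles one internal buffer, as many native
# capture APIs do for efficiency.
buf = bytearray(3)

def grab(value):
    buf[:] = bytes([value]) * 3  # overwrite the single internal buffer
    return buf                   # hand out the same buffer every time

# Storing what grab() returns keeps three references to the SAME buffer:
frames_views = [grab(v) for v in (1, 2, 3)]

# Copying before storing (the moral equivalent of mat.clone()) keeps the data:
frames_copies = [bytes(grab(v)) for v in (1, 2, 3)]

print(frames_views[0][0], frames_copies[0][0])  # 3 1
```

The first list silently ends up with three identical "frames" holding the last grabbed value, while the copied list preserves each frame as it was grabbed.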
-
Error decoding a simple audio file using FFmpeg library
29 March 2017, by satyres
After successfully compiling the latest version of the FFmpeg library and generating the .a libraries on Ubuntu, I’ve been struggling for more than a week to decode and play a simple MP3 file on Android, without success!
I’ve followed this tutorial given by the FFmpeg team on GitHub, and I’ve tried to use it on Android, but no luck!
Here is the native code:
void Java_com_example_home_hellondk_MainActivity_audio_1decode_1example(JNIEnv *env, jobject obj, jstring file, jbyteArray array) {
jboolean isfilenameCopy;
const char *filename = (*env)->GetStringUTFChars(env, file, &isfilenameCopy);
jclass cls = (*env)->GetObjectClass(env, obj);
jmethodID play = (*env)->GetMethodID(env, cls, "playSound", "([BI)V");
AVCodec * codec;
AVCodecContext * c = NULL;
int len;
FILE * f, * outfile;
uint8_t inbuf[AUDIO_INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
AVPacket avpkt;
AVFrame * decoded_frame = NULL;
av_init_packet(&avpkt);
printf("Decode audio file %s \n", filename);
LOGE("Decode audio file %s\n", filename);
/* find the MPEG audio decoder */
codec = avcodec_find_decoder(AV_CODEC_ID_MP3);
if (!codec) {
fprintf(stderr, "Codec not found\n");
LOGE("Codec not found\n");
exit(1);
}
c = avcodec_alloc_context3(codec);
if (!c) {
fprintf(stderr, "Could not allocate audio codec context\n");
LOGE("Could not allocate audio codec context\n");
exit(1);
}
/* open it */
if (avcodec_open2(c, codec, NULL) < 0) {
fprintf(stderr, "Could not open codec\n");
LOGE("Could not open codec\n");
exit(1);
}
f = fopen(filename, "rb");
if (!f) {
fprintf(stderr, "Could not open %s\n", filename);
LOGE("Could not open %s\n", filename);
exit(1);
}
/* decode until eof */
avpkt.data = inbuf;
avpkt.size = fread(inbuf, 1, AUDIO_INBUF_SIZE, f);
while (avpkt.size > 0) {
int i, ch;
int got_frame = 0;
if (!decoded_frame) {
if (!(decoded_frame = av_frame_alloc())) {
fprintf(stderr, "Could not allocate audio frame\n");
LOGE("Could not allocate audio frame\n");
exit(1);
}
}
len = avcodec_decode_audio4(c, decoded_frame, &got_frame, &avpkt);
if (len < 0) {
fprintf(stderr, "Error while decoding\n");
LOGE("Error while decoding\n");
exit(1);
}
if (got_frame) {
/* if a frame has been decoded, output it */
int data_size = av_get_bytes_per_sample(c->sample_fmt);
if (data_size < 0) {
/* This should not occur, checking just for paranoia */
fprintf(stderr, "Failed to calculate data size\n");
LOGE("Failed to calculate data size\n");
exit(1);
}
if (data_size > 0) {
jbyte *bytes = (*env)->GetByteArrayElements(env, array, NULL);
memcpy(bytes, decoded_frame, got_frame); //
(*env)->ReleaseByteArrayElements(env, array, bytes, 0);
(*env)->CallVoidMethod(env, obj, play, array, got_frame);
LOGE("DECODING ERROR5");
}
}
avpkt.size -= len;
avpkt.data += len;
avpkt.dts =
avpkt.pts = AV_NOPTS_VALUE;
if (avpkt.size < AUDIO_REFILL_THRESH) {
/* Refill the input buffer, to avoid trying to decode
* incomplete frames. Instead of this, one could also use
* a parser, or use a proper container format through
* libavformat. */
memmove(inbuf, avpkt.data, avpkt.size);
avpkt.data = inbuf;
len = fread(avpkt.data + avpkt.size, 1,
AUDIO_INBUF_SIZE - avpkt.size, f);
if (len > 0)
avpkt.size += len;
}
}
fclose(f);
avcodec_free_context(&c);
av_frame_free(&decoded_frame);
}
The Java code:
package com.example.home.hellondk;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.media.MediaPlayer;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
public class MainActivity extends AppCompatActivity {
static {
System.loadLibrary("MyLibraryPlayer");
}
public native void createEngine();
public native void audio_decode_example(String outfilename, byte[] array);
private AudioTrack track;
private FileOutputStream os;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
createEngine();
/* MediaPlayer mp = new MediaPlayer();
mp.start();*/
int bufSize = AudioTrack.getMinBufferSize(32000,
AudioFormat.CHANNEL_CONFIGURATION_STEREO,
AudioFormat.ENCODING_PCM_16BIT);
track = new AudioTrack(AudioManager.STREAM_MUSIC,
32000,
AudioFormat.CHANNEL_CONFIGURATION_STEREO,
AudioFormat.ENCODING_PCM_16BIT,
bufSize,
AudioTrack.MODE_STREAM);
byte[] bytes = new byte[bufSize];
try {
os = new FileOutputStream("/storage/emulated/0/Cloud Radio/a.out", false);
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
audio_decode_example("/storage/emulated/0/Cloud Radio/test.mp3", bytes);
}
void playSound(byte[] buf, int size) {
//android.util.Log.v("ROHAUPT", "RAH Playing");
if (track.getPlayState() != AudioTrack.PLAYSTATE_PLAYING)
track.play();
track.write(buf, 0, size);
try {
os.write(buf, 0, size);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
I always get this error: "Error while decoding".
I’ve tried changing the decoder ("AV_CODEC_ID_MP3"), but no success!
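Independently of the decode error itself, note that memcpy(bytes, decoded_frame, got_frame) copies the start of the AVFrame struct itself (and got_frame is only a 0/1 flag, not a byte count), rather than the decoded samples. FFmpeg’s own decode example writes the samples per sample and per channel from decoded_frame->data[ch], data_size bytes each, which interleaves planar channel data into a packed stream. A hedged Python sketch of that interleaving step (function and variable names are illustrative, not an FFmpeg API):

```python
import struct

def interleave_planar_s16(planes, nb_samples):
    """Interleave planar 16-bit PCM channel buffers (one bytes object per
    channel) into a single packed buffer: L0 R0 L1 R1 ..."""
    out = bytearray()
    for i in range(nb_samples):              # for each sample index...
        for plane in planes:                 # ...take that sample from each channel
            out += plane[i * 2:(i + 1) * 2]  # 2 bytes per s16 sample
    return bytes(out)

left = struct.pack("<4h", 1, 2, 3, 4)        # one decoded "plane" per channel
right = struct.pack("<4h", 5, 6, 7, 8)
packed = interleave_planar_s16([left, right], 4)
```

In the JNI code this corresponds to copying decoded_frame->nb_samples * channels * data_size bytes, sample by sample from each decoded_frame->data[ch], and passing that byte count (not got_frame) to playSound.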
Thank you so much for your help.
Kind regards -
QTableWidget and QProcess - update table based on multiple process results
9 March 2017, by Spencer
I have a Python program that runs through a QTableWidget and, for each item, runs a QProcess (an FFmpeg process, to be exact). What I’m trying to do is update the "parent" cell when the process completes. Right now a for loop goes through each row and launches a process for it, connecting that process’s finished signal to a "finished" function, which updates the QTableWidget cell. I’m having trouble telling the function WHICH cell to update: I pass it the index of the current row (seeing as it is being spawned by the for loop), but by the time the processes start to finish it only ever gets the last row in the table... I’m quite new to Python and PyQt, so it is possible there is some fundamental thing I have wrong here!
I tried passing the actual QTableWidgetItem instead of the index, but I got this error: "RuntimeError: wrapped C/C++ object of type QTableWidgetItem has been deleted".
My code; the function "finished" and line #132 are the relevant parts:
import sys, os, re
from PyQt4 import QtGui, QtCore
class BatchTable(QtGui.QTableWidget):
def __init__(self, parent):
super(BatchTable, self).__init__(parent)
self.setAcceptDrops(True)
self.setColumnCount(4)
self.setColumnWidth(1,50)
self.hideColumn(3)
self.horizontalHeader().setStretchLastSection(True)
self.setHorizontalHeaderLabels(QtCore.QString("Status;Alpha;File;Full Path").split(";"))
self.doubleClicked.connect(self.removeProject)
def removeProject(self, myItem):
row = myItem.row()
self.removeRow(row)
def dragEnterEvent(self, e):
if e.mimeData().hasFormat('text/uri-list'):
e.accept()
else:
print "nope"
e.ignore()
def dragMoveEvent(self, e):
e.accept()
def dropEvent(self, e):
if e.mimeData().hasUrls:
for url in e.mimeData().urls():
chkBoxItem = QtGui.QTableWidgetItem()
chkBoxItem.setFlags(QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsEnabled)
chkBoxItem.setCheckState(QtCore.Qt.Unchecked)
rowPosition = self.rowCount()
self.insertRow(rowPosition)
self.setItem(rowPosition, 0, QtGui.QTableWidgetItem("Ready"))
self.setItem(rowPosition, 1, chkBoxItem)
self.setItem(rowPosition, 2, QtGui.QTableWidgetItem(os.path.split(str(url.toLocalFile()))[1]))
self.setItem(rowPosition, 3, QtGui.QTableWidgetItem(url.toLocalFile()))
self.item(rowPosition, 0).setBackgroundColor(QtGui.QColor(80, 180, 30))
class ffmpegBatch(QtGui.QWidget):
def __init__(self):
super(ffmpegBatch, self).__init__()
self.initUI()
def initUI(self):
self.edit = QtGui.QTextEdit()
cmdGroup = QtGui.QGroupBox("Commandline arguments")
fpsLbl = QtGui.QLabel("FPS:")
self.fpsCombo = QtGui.QComboBox()
self.fpsCombo.addItem("29.97")
self.fpsCombo.addItem("23.976")
hbox1 = QtGui.QHBoxLayout()
hbox1.addWidget(fpsLbl)
hbox1.addWidget(self.fpsCombo)
cmdGroup.setLayout(hbox1)
saveGroup = QtGui.QGroupBox("Output")
self.outputLocation = QtGui.QLineEdit()
self.browseBtn = QtGui.QPushButton("Browse")
saveLocationBox = QtGui.QHBoxLayout()
# Todo: add "auto-step up two folders" button
saveLocationBox.addWidget(self.outputLocation)
saveLocationBox.addWidget(self.browseBtn)
saveGroup.setLayout(saveLocationBox)
runBtn = QtGui.QPushButton("Run Batch Transcode")
mainBox = QtGui.QVBoxLayout()
self.table = BatchTable(self)
# TODO: add "copy from clipboard" feature
mainBox.addWidget(self.table)
mainBox.addWidget(cmdGroup)
mainBox.addWidget(saveGroup)
mainBox.addWidget(runBtn)
mainBox.addWidget(self.edit)
self.setLayout(mainBox)
self.setGeometry(300, 300, 600, 500)
self.setWindowTitle('FFMPEG Batch Converter')
# triggers/events
runBtn.clicked.connect(self.run)
def RepresentsInt(self, s):
try:
int(s)
return True
except ValueError:
return False
def run(self):
if (self.outputLocation.text() == ''):
return
for projIndex in range(self.table.rowCount()):
# collect some data
ffmpeg_app = "C:\\Program Files\\ffmpeg-20150702-git-03b2b40-win64-static\\bin\\ffmpeg"
frameRate = self.fpsCombo.currentText()
inputFile = self.table.model().index(projIndex,3).data().toString()
outputPath = self.outputLocation.text()
outputPath = outputPath.replace("/", "\\")
# format the input for ffmpeg
# find how the exact number range, stored as 'd'
imageName = os.path.split(str(inputFile))[1]
imageName, imageExt = os.path.splitext(imageName)
length = len(imageName)
d = 0
while (self.RepresentsInt(imageName[length-2:length-1]) == True):
length = length-1
d = d+1
inputPath = os.path.split(str(inputFile))[0]
inputFile = imageName[0:length-1]
inputFile = inputPath + "/" + inputFile + "%" + str(d+1) + "d" + imageExt
inputFile = inputFile.replace("/", "\\")
# format the output
outputFile = outputPath + "\\" + imageName[0:length-2] + ".mov"
# build the commandline
cmd = '"' + ffmpeg_app + '"' + ' -y -r ' + frameRate + ' -i ' + '"' + inputFile + '"' + ' -vcodec dnxhd -b:v 145M -vf colormatrix=bt601:bt709 -flags +ildct ' + '"' + outputFile + '"'
# launch the process
proc = QtCore.QProcess(self)
proc.finished.connect(lambda: self.finished(projIndex))
proc.setProcessChannelMode(proc.MergedChannels)
proc.start(cmd)
proc.readyReadStandardOutput.connect(lambda: self.readStdOutput(proc, projIndex, 100))
self.table.setItem(projIndex, 0, QtGui.QTableWidgetItem("Running..."))
self.table.item(projIndex, 0).setBackgroundColor(QtGui.QColor(110, 145, 30))
def readStdOutput(self, proc, projIndex, total):
currentLine = QtCore.QString(proc.readAllStandardOutput())
currentLine = str(currentLine)
frameEnd = currentLine.find("fps", 0, 15)
if frameEnd != -1:
m = re.search("\d", currentLine)
if m:
frame = currentLine[m.start():frameEnd]
percent = (float(frame)/total)*100
print "Percent: " + str(percent)
self.edit.append(str(percent))
self.table.setItem(projIndex, 0, QtGui.QTableWidgetItem("Encoded: " + str(percent) + "%"))
def finished(self, projIndex):
# TODO: This isn't totally working properly for multiple processes (seems to get confused)
print "A process completed"
print self.sender().readAllStandardOutput()
if self.sender().exitStatus() == 0:
self.table.setItem(projIndex, 0, QtGui.QTableWidgetItem("Encoded"))
self.table.item(projIndex, 0).setBackgroundColor(QtGui.QColor(45, 145, 240))
def main():
app = QtGui.QApplication(sys.argv)
ex = ffmpegBatch()
ex.show()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
(And yes, I do know that my percentage update is totally wrong right now; still working on that...)
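The "only gets the last row" symptom described above is Python’s late binding of closures: lambda: self.finished(projIndex) looks projIndex up when the finished signal actually fires, after the for loop has ended, so every connection sees the loop variable’s final value. Binding the current value as a default argument (lambda projIndex=projIndex: ...) evaluates it at definition time and freezes it per iteration. A minimal sketch of both behaviors:

```python
# Late binding: every lambda closes over the same loop variable i,
# so all of them see its final value when they are eventually called.
callbacks_late = [lambda: i for i in range(3)]

# Default-argument binding: i=i is evaluated when the lambda is defined,
# giving each lambda its own frozen copy of i.
callbacks_bound = [lambda i=i: i for i in range(3)]

late = [f() for f in callbacks_late]
bound = [f() for f in callbacks_bound]
print(late, bound)  # [2, 2, 2] [0, 1, 2]
```

Applied to the code above, proc.finished.connect(lambda projIndex=projIndex: self.finished(projIndex)) would pin each connection to its own row. The separate RuntimeError when passing the QTableWidgetItem is consistent with the item being deleted on the C++ side (for example via removeProject) while the process is still running, so the row index plus a liveness check is the safer handle.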