Artificial Neural Network (ANN) BPN Digital Image Compression: MATLAB Source Code

Author: aaron8967    Homepage: http://aaron8967.blog.51cto.com

Overview:

An Artificial Neural Network (ANN) is an information-processing system that mimics the way the human nervous system stores and processes information. It is a complex system built from very simple computational neurons, and its nonlinearity, adaptability, fault tolerance, and parallelism give it a very wide range of applications. This article implements a network trained with the Error Back Propagation algorithm, commonly called a BP network.

I wrote this image-compression program while studying artificial neural networks, following the method described in a paper, to test compression results in MATLAB.

I am sharing it here for study and discussion, in the hope that it helps anyone who would rather implement a neural network by hand than use MATLAB's Neural Network Toolbox. The source files are available at the download link below.

                  

Network structure:

The BP network consists of one input layer, a number of hidden layers, and one output layer; the number of hidden layers and the units per layer are configurable. The compressed image code is taken from the middle hidden layer, giving a compression ratio of 8:1.
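To illustrate the 64-8-64 structure described above, here is a minimal NumPy sketch (an illustration, not the article's MATLAB code) of one forward pass through such a network. The 1/100 weight scaling and the 0.05 activation threshold mirror the settings used later in the article; the input block is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
threshold = 0.05                 # activation threshold, as in the article's settings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 64 -> 8 -> 64 weights, small random values as NetWeight.m produces.
W_enc = rng.random((64, 8)) / 100
W_dec = rng.random((8, 64)) / 100

block = rng.random(64)                       # a flattened 8x8 block in [0, 1]
code = sigmoid(block @ W_enc - threshold)    # 8 values: the compressed code
recon = sigmoid(code @ W_dec - threshold)    # 64 values: the reconstruction

print(code.shape, recon.shape)               # (8,) (64,)
```

Storing the 8-element code instead of the 64 pixels is what gives the 8:1 ratio quoted above, before any quantization of the code values.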

File structure:

Main script: ArtificialNeuralNetwork.m

Preprocessing function: PreProcessing.m
Purpose: data preprocessing; creates and writes the log file, splits the image into blocks, etc. Tested successfully on images from 256×256 up to 1024×1024.

Weight initialization function: NetWeight.m
Purpose: initializes the network weights to small random values.

Training function: TrainingFunction.m
Purpose: trains the artificial neural network; called when training mode is on.

Activation function: ActivationFunction.m
Purpose: the neurons' activation function.

Computation function: Calculating.m
Purpose: compresses and reconstructs the image using the current network weights; processes one 8×8 block at a time.

Post-processing function: PostProcessing.m
Purpose: analyzes the reconstruction error, writes the result data and run log, and saves the original and reconstructed images for comparison.
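The blocking step that PreProcessing.m performs with mat2cell can be sketched in NumPy as follows (a hypothetical illustration on a synthetic 256×256 array, not the article's code):

```python
import numpy as np

# Split a 256x256 image into 8x8 blocks and flatten each into 64 values,
# mirroring the block sizes used in the article.
img = np.arange(256 * 256, dtype=np.float64).reshape(256, 256)

bx = by = 8
blocks = (img.reshape(256 // bx, bx, 256 // by, by)
             .transpose(0, 2, 1, 3)      # -> (32, 32, 8, 8) grid of blocks
             .reshape(-1, bx * by))      # -> (1024, 64): one row per block

# The first row is the top-left 8x8 corner of the image, flattened row-wise.
print(blocks.shape)                      # (1024, 64)
```

Each 64-element row then becomes one input vector for the 64-8-64 network.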

Source code:

%% ArtificialNeuralNetwork.m %%
%Program description
%Back Propagation for Digital Image Compression
%General Program of Artificial Neural Network
%Activation Function:    Sigmoid Function
%Neural Network Form:    Three-Layer Hierarchical Structure
%Training Algorithm:     Error Back Propagation Algorithm
%Author:                 Aaron
%Date:                   2012.06.23

%Setup
clc;
clear all;                    %clear the workspace

%Parameters
LayerNum=3;                   %number of network layers
LayerUnitNum=[64 8 64];       %number of units in each layer
TrainingMode=1;               %training mode: 0 off, 1 on
TrainingNum=60;               %number of training iterations
UnitThreshold=0.05;           %activation threshold of each unit
eta=0.5;                      %learning rate

%Main program
%Preprocessing
[PreDone,DirResult]=PreProcessing(LayerNum,LayerUnitNum,TrainingMode,TrainingNum,UnitThreshold,eta);
if PreDone==1
    %Computation
    Output=Calculating(DirResult);
    %Post-processing
    PostProcessing( Output,DirResult );
end
%End of program

 

 

%% PreProcessing.m %%
function [ PreDone,DirResult ]=PreProcessing( LayerNum,LayerUnitNum,TrainingMode,TrainingNum,UnitThreshold,eta )
%PREPROCESSING  Create the result directory and log, load the image, and
%split it into blocks. PreDone reports status: 1 OK, 2 input file missing,
%3 image size not a multiple of 256, 4 weight file missing.
PreDone=1;
%Create result directory%
DirCurr=cd;              %current directory
TimeStart=fix(clock);    %current time
TimeStr=cell(1,6);
for i=1:6
    if TimeStart(i)<10
        TimeStr{i}=['0' num2str(TimeStart(i))];
    else
        TimeStr{i}=num2str(TimeStart(i));
    end
end
FolderResult=['Result' TimeStr{1} TimeStr{2} TimeStr{3} TimeStr{4} TimeStr{5} TimeStr{6}];
if exist(FolderResult,'dir')==0
    mkdir([DirCurr '\' FolderResult]);     %create the result directory
end
DirResult=[DirCurr '\' FolderResult '\'];
%Create log%
fid=fopen([DirResult 'ResultInfo' '.' 'txt'],'at+');
fprintf(fid,'\r\nProgram run information\r\n');
if TrainingMode==0
    fprintf(fid,'Run mode: computation\r\n');
else
    fprintf(fid,'Run mode: training\r\n');
    fprintf(fid,'Training iterations: %d\r\n',TrainingNum);
end
fprintf(fid,'Network layers: ');
for i=1:LayerNum
    fprintf(fid,'%d\t',i);
end
fprintf(fid,'\r\n');
fprintf(fid,'Units per layer: ');
for i=1:LayerNum
    fprintf(fid,'%d\t',LayerUnitNum(i));
end
fprintf(fid,'\r\n');
fprintf(fid,'Learning rate: %f\r\n',eta);
fprintf(fid,'Activation threshold: %f\r\n',UnitThreshold);
fprintf(fid,'Bit rate: 0.625\r\n');
fprintf(fid,'Compression ratio: 8:1\r\n');
fprintf(fid,'Start time: %4d.%2d.%2d %4d:%2d:%2d\r\n',TimeStart(1),TimeStart(2),TimeStart(3),TimeStart(4),TimeStart(5),TimeStart(6));
fprintf(fid,'Preprocessing...\r\n\r\n');
fclose(fid);
%Load data%
FileName='test01';
FileType='bmp';
FileFolder='Data';
if ((exist([DirCurr '\' FileFolder],'dir')) && (exist([DirCurr '\' FileFolder '\' FileName '.' FileType],'file')))==1
    [Pixel,Colormap]=imread([DirCurr '\' FileFolder '\' FileName '.' FileType]);      %load the input image
    save([DirResult 'Pixel.mat'],'Pixel');
    save([DirResult 'Colormap.mat'],'Colormap');
    imwrite(Pixel,Colormap,[DirResult 'pixelOrigin.bmp']);
%Split the matrix into blocks%
    [PixelX,PixelY]=size(Pixel);
    PixelNum=PixelX*PixelY;
    DimPixel=256;                %number of gray levels
    BlockX=8;
    BlockY=8;
    BigBlockX=256;
    BigBlockY=256;
    BlockNX=BigBlockX/BlockX;
    BlockNY=BigBlockY/BlockY;
    BlockNum=BlockNX*BlockNY;
    BigBlockNX=PixelX/BigBlockX;
    BigBlockNY=PixelY/BigBlockY;
    BigBlockNum=BigBlockNX*BigBlockNY;
    BigBlock=mat2cell(Pixel,ones(PixelX/BigBlockX,1)*BigBlockX,ones(PixelY/BigBlockY,1)*BigBlockY);
    Block=cell(BigBlockNX,BigBlockNY);
    for i=1:BigBlockNX
        for j=1:BigBlockNY
            Block{i,j}=mat2cell(BigBlock{i,j},ones(BigBlockX/BlockX,1)*BlockX,ones(BigBlockY/BlockY,1)*BlockY);
        end
    end
%Input and output%
    BInput=Block;
    BOutput=BInput;
    BCorrect=BInput;
    if ((rem(PixelX,256)~=0)||(rem(PixelY,256)~=0))
        PreDone=3;
    end
    if TrainingMode==1          %training mode
        WFinal=cell(BigBlockNum,BlockNum);
    else                        %computation mode
        if exist([DirResult 'WFinal.mat'],'file')~=0
            load([DirResult 'WFinal.mat']);         %load the trained weights
        else
            PreDone=4;
            WFinal=cell(BigBlockNum,BlockNum);      %empty placeholder so the save below succeeds
        end
    end
else
    PreDone=2;
end
%Save variables%
if PreDone~=2               %the block variables only exist if the image was loaded
    %'ResutlPre' misspelling kept: the other functions load this filename
    save([DirResult 'ResutlPre.mat'],'BlockX','BlockY','BlockNX','BlockNY','BigBlockNX','BigBlockNY','PixelNum','WFinal','BInput','BOutput','BCorrect','DimPixel','TimeStart','TrainingNum','TrainingMode','LayerNum','LayerUnitNum','UnitThreshold','eta');
end
%Log status%
fid=fopen([DirResult 'ResultInfo' '.' 'txt'],'at+');
Time=fix(clock);
fprintf(fid,'Time: %4d:%2d:%2d\r\n',Time(4),Time(5),Time(6));
if PreDone==1
    fprintf(fid,'Running...\r\n\r\n');
elseif PreDone==2
    fprintf(fid,'Target path or file does not exist, cannot read...\r\n\r\n');
elseif PreDone==3
    fprintf(fid,'Image size does not match the required N*(256*256), aborting...\r\n\r\n');
elseif PreDone==4
    fprintf(fid,'Weight file does not exist, cannot read...\r\n\r\n');
end
fclose(fid);
end

 

%% NetWeight.m %%
function [ W ] = NetWeight( LayerNum, LayerUnitNum )
%NETWEIGHT  Initialize the network weights to small random values.
W=cell(1,LayerNum);
W{1}=ones(LayerUnitNum(LayerNum),LayerUnitNum(1));      %input-layer placeholder, not used in updates
for i=2:LayerNum
    W{i}=(rand(LayerUnitNum(i-1),LayerUnitNum(i)))/100; %random values in (0, 0.01)
end
end
%End of function%

%% TrainingFunction.m %%
function [ WFinal ] = TrainingFunction( LayerNum,LayerUnitNum,W,Output,Correct,eta )
%TRAININGFUNCTION  One error back-propagation update of the weights,
%given the layer outputs of the forward pass and the target vector.
%Initialize deltas%
delta=cell(1,LayerNum);
for i=1:LayerNum
    delta{i}=zeros(LayerUnitNum(i),1);
end
%Compute deltas%
for i=1:LayerUnitNum(LayerNum)     %output-layer deltas
    d=Correct(i);
    o=Output{LayerNum}(i);
    delta{LayerNum}(i)=(d-o)*o*(1-o);
end
for j=(LayerNum-1):-1:2            %hidden-layer deltas
    for i=1:LayerUnitNum(j)
        o=Output{j}(i);
        delta{j}(i)=dot(delta{j+1},W{j+1}(i,:))*o*(1-o);
    end
end
%Update weights%
for j=LayerNum:-1:2
    W{j}=W{j}+eta*((delta{j})*(Output{j-1})')';
end
WFinal=W;
end
%End of function%
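For readers following the delta computations in TrainingFunction.m outside MATLAB, here is a NumPy sketch of one back-propagation update for a 64-8-64 network. Variable roles mirror the MATLAB code (the deltas, eta, and the outer-product weight update); the activation threshold is omitted for brevity, and the data is synthetic:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
eta = 0.5
W2 = rng.random((64, 8)) / 100    # input -> hidden, W{2} in the MATLAB code
W3 = rng.random((8, 64)) / 100    # hidden -> output, W{3}

x = rng.random(64)                # flattened 8x8 block in [0, 1]
h = sigmoid(x @ W2)               # hidden-layer output
o = sigmoid(h @ W3)               # reconstruction
d = x                             # target is the input itself (autoencoder)

# Output-layer delta: (d - o) * o * (1 - o), as in the MATLAB loop.
delta3 = (d - o) * o * (1.0 - o)
# Hidden-layer delta: back-propagate through W3, times the sigmoid derivative.
delta2 = (W3 @ delta3) * h * (1.0 - h)

# Weight updates: W{j} += eta * outer(previous layer output, delta{j}).
W3 += eta * np.outer(h, delta3)
W2 += eta * np.outer(x, delta2)
```

The transposed outer product in the MATLAB line `W{j}=W{j}+eta*((delta{j})*(Output{j-1})')'` is exactly `np.outer(prev_output, delta)` here.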

%% ActivationFunction.m %%
function [ Output ] = ActivationFunction( LayerNum,LayerUnitNum,W,Input,UnitThreshold )
%ACTIVATIONFUNCTION  Forward pass: propagate Input through the network
%with a sigmoid activation, returning each layer's output vector.
%Initialize outputs%
Output=cell(1,LayerNum);
for j=1:LayerNum
    Output{j}=zeros(LayerUnitNum(j),1);
end
%Sigmoid activation%
Output{1}=Input;
for j=2:LayerNum
    for i=1:LayerUnitNum(j)
        temp=dot(W{j}(:,i),Output{(j-1)})-UnitThreshold;
        Output{j}(i)=1/(1+exp(-temp));
    end
end
end
%End of function%

%% Calculating.m %%
function [ Output ] = Calculating( DirResult )
%CALCULATING  Compress and reconstruct the image block by block; in
%training mode, a network is trained for each block first.
%Load variables%
load([DirResult 'ResutlPre.mat']);
%Setup%
Input=zeros((BlockX*BlockY),1);
Correct=zeros((BlockX*BlockY),1);
ErrorHistory=zeros(TrainingNum,1);
%Block-by-block computation%
BigBN=1;
for i=1:BigBlockNX
    for j=1:BigBlockNY
        BN=1;
        for m=1:BlockNX
            for n=1:BlockNY
                lin=1;                       %flatten the input block
                for lm=1:BlockX
                    for ln=1:BlockY
                        Input(lin)=double(BInput{i,j}{m,n}(lm,ln))/DimPixel;
                        Correct(lin)=double(BCorrect{i,j}{m,n}(lm,ln))/DimPixel;
                        lin=lin+1;
                    end
                end
                if TrainingMode==1           %training mode
                    W=NetWeight(LayerNum,LayerUnitNum);
                    for t=1:TrainingNum
                        Output=ActivationFunction( LayerNum,LayerUnitNum,W,Input,UnitThreshold);
                        W=TrainingFunction( LayerNum,LayerUnitNum,W,Output,Correct,eta);
                        ErrorHistory(t)=ErrorHistory(t)+sum(sum(abs(Output{LayerNum}-Input)))/(BlockX*BlockY);
                    end
                    WFinal{BigBN,BN}=W;
                else                         %computation mode: use the stored weights for this block
                    W=WFinal{BigBN,BN};
                end
                Output=ActivationFunction( LayerNum,LayerUnitNum,W,Input,UnitThreshold);    %compute
                lout=1;                      %rebuild the output block
                for lm=1:BlockX
                    for ln=1:BlockY
                        BOutput{i,j}{m,n}(lm,ln)=uint8(Output{LayerNum}(lout)*DimPixel);
                        lout=lout+1;
                    end
                end
                BN=BN+1;
            end
        end
        BigBN=BigBN+1;
    end
end
ErrorHistory=ErrorHistory/(BlockNX*BlockNY*BigBlockNX*BigBlockNY);
save([DirResult 'ErrorHistory.mat'],'ErrorHistory');
save([DirResult 'WFinal.mat'],'WFinal');
Output=BOutput;
end
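Putting the pieces together, the per-block training loop in Calculating.m (a forward pass, then a back-propagation update, repeated 60 times) can be sketched end to end in NumPy like this. The threshold is again omitted for brevity and the block is synthetic, so this only demonstrates the mechanics, not the article's results:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
eta, n_iter = 0.5, 60                  # settings from the article
x = rng.random(64)                     # one normalized 8x8 block
W2 = rng.random((64, 8)) / 100         # 64 -> 8
W3 = rng.random((8, 64)) / 100         # 8 -> 64

errors = []
for _ in range(n_iter):
    h = sigmoid(x @ W2)                # forward pass
    o = sigmoid(h @ W3)
    delta3 = (x - o) * o * (1 - o)     # back-propagation update
    delta2 = (W3 @ delta3) * h * (1 - h)
    W3 += eta * np.outer(h, delta3)
    W2 += eta * np.outer(x, delta2)
    errors.append(np.abs(o - x).mean())    # mean absolute reconstruction error

# The per-iteration errors play the role of ErrorHistory in the MATLAB code.
assert errors[-1] < errors[0]
```

Plotting `errors` against the iteration index gives the same kind of curve that PostProcessing.m saves as FigureError.bmp.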

 

%% PostProcessing.m %%
function [  ] = PostProcessing( Output,DirResult )
%POSTPROCESSING  Reassemble the image, compute the error metrics, write
%the log, and save the result images and the error plot.
%Load variables%
load([DirResult 'ResutlPre.mat']);
load([DirResult 'ErrorHistory.mat']);
%Log%
fid=fopen([DirResult 'ResultInfo' '.' 'txt'],'at+');
Time=fix(clock);
fprintf(fid,'Time: %4d:%2d:%2d\r\n',Time(4),Time(5),Time(6));
fprintf(fid,'Post-processing...\r\n\r\n');
fclose(fid);
%Reassemble the matrix%
BigBlock=cell(BigBlockNX,BigBlockNY);
Block=Output;
for i=1:BigBlockNX
    for j=1:BigBlockNY
        BigBlock{i,j}=cell2mat(Block{i,j});
    end
end
PixelResult=cell2mat(BigBlock);
save([DirResult 'PixelResult.mat'],'PixelResult');
%Write result image%
load([DirResult 'Colormap.mat']);
imwrite(PixelResult,Colormap,[DirResult 'pixelResult.bmp']);
%Error plot%
hFig=figure;                       %keep a figure handle; do not shadow the figure function
plot(ErrorHistory);
xlabel('Training iterations');
ylabel('Mean error');
title('BPN for Image Compression');
axis([1,TrainingNum,0,1]);
saveas(hFig,[DirResult 'FigureError.bmp']);
%Error metrics%
load([DirResult 'Pixel.mat']);
Diff=double(PixelResult)-double(Pixel);    %cast to double first: uint8 arithmetic saturates at 0 and 255
Error=sum(sum(abs(Diff)))/(PixelNum*DimPixel);
SNR=10*log10(sum(sum(double(Pixel).^2))/sum(sum(Diff.^2)));
PSNR=10*log10((PixelNum*(DimPixel^2))/sum(sum(Diff.^2)));
save([DirResult 'Error.mat'],'Error');
%Log%
fid=fopen([DirResult 'ResultInfo' '.' 'txt'],'at+');
TimeEnd=fix(clock);
fprintf(fid,'End time: %4d.%2d.%2d %4d:%2d:%2d\r\n',TimeEnd(1),TimeEnd(2),TimeEnd(3),TimeEnd(4),TimeEnd(5),TimeEnd(6));
TimeLast=TimeEnd-TimeStart;
for i=6:-1:4                       %borrow across seconds, minutes, and hours
    if TimeLast(i)<0
        if i>=5
            TimeLast(i)=TimeLast(i)+60;
        else
            TimeLast(i)=TimeLast(i)+24;
        end
        TimeLast(i-1)=TimeLast(i-1)-1;
    end
end
fprintf(fid,'\r\nRun time: %2dh %2dm %2ds\r\n',TimeLast(4),TimeLast(5),TimeLast(6));
fprintf(fid,'Mean error: %f\r\n',Error);
fprintf(fid,'SNR: %f\r\n',SNR);
fprintf(fid,'PSNR: %f\r\n',PSNR);
fprintf(fid,'Original file: PixelOrigin.bmp\r\n');
fprintf(fid,'Restored file: PixelResult.bmp\r\n');
fprintf(fid,'\r\n');
fclose(fid);
end
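The error metrics that PostProcessing.m logs can be sketched in NumPy as follows. This is an illustration with hypothetical 8×8 test data, not the article's code; note the cast to float before subtracting, since unsigned 8-bit arithmetic would saturate and distort the error:

```python
import numpy as np

def image_metrics(original, restored, dim_pixel=256):
    # Cast to float first: uint8 subtraction saturates at 0 and 255.
    orig = original.astype(np.float64)
    rest = restored.astype(np.float64)
    n = orig.size
    noise = ((rest - orig) ** 2).sum()
    err = np.abs(rest - orig).sum() / (n * dim_pixel)    # mean error
    snr = 10 * np.log10((orig ** 2).sum() / noise)       # signal-to-noise ratio
    # PostProcessing.m uses DimPixel^2 = 256^2 as the peak power;
    # 255^2 is the more common PSNR convention.
    psnr = 10 * np.log10(n * dim_pixel ** 2 / noise)
    return err, snr, psnr

a = np.full((8, 8), 100, dtype=np.uint8)   # flat synthetic "original"
b = a.copy()
b[0, 0] = 110                              # one pixel off by 10
err, snr, psnr = image_metrics(a, b)
```

With this single-pixel error the noise power is 100 over 64 pixels, so the SNR evaluates to about 38.06 dB and the PSNR (with the 256² peak) to about 46.23 dB.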

 

Sample run log:

Run mode: training
Training iterations: 60
Network layers: 1      2      3      4
Units per layer: 64     16     16     64
Learning rate: 0.500000
Activation threshold: 0.050000
Bit rate: 0.625
Compression ratio: 8:1
Start time: 2012. 6.25  22:46:31
Preprocessing...

Time: 22:46:31
Running...

Time: 22:49:34
Post-processing...

End time: 2012. 6.25  22:49:35

Run time: 0h 3m 4s
Mean error: 0.001234
SNR: 19.039471
PSNR: 43.264512
Original file: PixelOrigin.bmp
Restored file: PixelResult.bmp

 

Compression comparison (image from the web):

Original image:

[image: original test image]

After compression and reconstruction:

[image: reconstructed image]

Source code download: http://pan.baidu.com/share/link?shareid=127753&uk=2199844354

This is an original article; please keep the author and source information when reposting.

My knowledge is limited, so please bear with any shortcomings; criticism and corrections are welcome.

                                                                                                       By  aaron8967