


Why does file_put_contents return false when creating a file in PHP on the server? The directory permissions are already set to 777. Puzzled.
I want to use file_get_contents to read an image file from the network and write it to the local server with file_put_contents. file_get_contents runs without problems and reads the data fine. I then take the returned data $data and execute:
var_dump(file_put_contents($local_file_position, $data)); During testing the var_dump output does appear, but in the end file_put_contents returns false, and I don't know why. Searching online suggests that Apache has no write permission. Is that really the cause? If so, how do I fix it?
Environment: CentOS 6.5 + MySQL 5.5.35 + Tomcat 7.0.47
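For context, a minimal sketch of the intended workflow with explicit error checking added; the URL is a placeholder and the target path is the one mentioned later in this thread, both used as assumptions:

<?php
// Hypothetical source URL and example target path, for illustration only.
$remote_url          = 'http://example.com/image.jpg';
$local_file_position = '/usr/local/www/wxtest/txt/image.jpg';

// Read the remote image; file_get_contents() returns false on failure.
$data = file_get_contents($remote_url);
if ($data === false) {
    die('file_get_contents failed');
}

// Check the target directory before writing, so a permission problem
// is reported explicitly instead of as a bare false return value.
$dir = dirname($local_file_position);
if (!is_dir($dir)) {
    die("Directory does not exist: $dir");
}
if (!is_writable($dir)) {
    die("Directory is not writable by the PHP process: $dir");
}

// file_put_contents() returns the number of bytes written, or false on failure.
var_dump(file_put_contents($local_file_position, $data));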
------Solution--------------------
Are you using an absolute path or a relative path?
------Solution--------------------
First confirm that the data returned by file_get_contents() actually exists and that its type is valid.
------Solution--------------------
Probably insufficient permissions.
echo substr(sprintf('%o', fileperms('your_directory')), -4); // see what this prints
------Solution--------------------
For the write to succeed, the directory containing the file must be writable too.
For example, to create /home/centos/123.txt,
the PHP process needs write permission on the directory /home/centos (and execute/search permission on the ancestors /home and /), not just write permission on 123.txt itself.
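A small sketch that walks up from the target file and prints the mode bits of each directory, so you can see where the write or traversal is blocked; the path is the example from this answer:

<?php
$target = '/home/centos/123.txt'; // example path from this answer
for ($dir = dirname($target); ; $dir = dirname($dir)) {
    printf("%-15s mode=%s writable=%s searchable=%s\n",
        $dir,
        substr(sprintf('%o', fileperms($dir)), -4),
        is_writable($dir) ? 'yes' : 'no',
        is_executable($dir) ? 'yes' : 'no');
    if ($dir === dirname($dir)) { // reached the filesystem root
        break;
    }
}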
------Solution--------------------
When file_put_contents fails, it emits an explicit warning message.
Please post it.
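If the warning does not show up because of your error settings, it can be captured in the script; a sketch using error_get_last(), with placeholder values standing in for the question's variables:

<?php
$local_file_position = '/usr/local/www/wxtest/txt/test.jpg'; // placeholder
$data = 'example data';                                      // placeholder
// Suppress the warning with @, then read it back with error_get_last()
// so the exact failure reason (e.g. "Permission denied") can be seen.
$result = @file_put_contents($local_file_position, $data);
if ($result === false) {
    $err = error_get_last();
    echo 'file_put_contents failed: ' . ($err ? $err['message'] : 'unknown error');
}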
------Solution--------------------
The message is already clear:
Permission denied
i.e. access was refused.
------Solution--------------------
You are writing the file into the /usr/local/www/wxtest/txt directory.
Check that the directory exists and is writable.
------Solution--------------------
1. Permission 755 means rwxr-xr-x: group members and other users have no write permission. Only 777 gives all users full permissions.
2. Use an absolute path, and before writing, check inside the script whether the path exists; if it does not, create the directory first and then write the file (see the sketch below). This is a lesson learned the hard way.
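A sketch of that check-then-create pattern, assuming the path mentioned in this thread:

<?php
$local_file_position = '/usr/local/www/wxtest/txt/test.jpg'; // absolute path (example from this thread)
$data = 'example data';                                      // placeholder
$dir = dirname($local_file_position);
// Create the directory tree first if it does not exist; the third argument
// makes mkdir() recursive. Note the 0777 mode is still subject to the umask.
if (!is_dir($dir) && !mkdir($dir, 0777, true)) {
    die("Could not create directory: $dir");
}
var_dump(file_put_contents($local_file_position, $data));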
------Solution--------------------
Set /usr/local/www/wxtest/txt to 777.
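From a shell that is chmod -R 777 /usr/local/www/wxtest/txt; if the PHP process itself owns the directory, the same change can also be attempted from PHP (a sketch, it fails for paths the process does not own):

<?php
// chmod() only succeeds if the PHP process owns the directory.
chmod('/usr/local/www/wxtest/txt', 0777);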
------Solution--------------------
Are you sure the mode is really 777, i.e. that users outside the group also have full permissions?
