The following are the primary reasons why a core file might not be generated. (Unless noted otherwise, this list applies to both Solaris OS and Linux.)
The current user does not have permission to write in the current working directory of the process.
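A quick way to check this is to find the process's actual working directory, which is not necessarily the directory you launched it from. A minimal sketch for Linux, assuming a hypothetical PID (on Solaris, the pwdx command prints the same information):

```shell
# Hypothetical PID; substitute the PID of the Java process.
PID=$$

# On Solaris, use:  pwdx "$PID"
# On Linux, /proc/<pid>/cwd is a symlink to the process's
# current working directory.
DIR=$(readlink "/proc/$PID/cwd")
echo "working directory: $DIR"

# Check whether the current user can write there.
if [ -w "$DIR" ]; then
    echo "working directory is writable"
else
    echo "working directory is NOT writable"
fi
```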
The current user has write permission on the current working directory, but a file named core already exists there and is read-only.
The file system containing the current directory does not have enough free space remaining.
The current directory has a subdirectory named core, so a file of that name cannot be created.
The current working directory is remote. It might be mounted over Network File System (NFS), and NFS failed just as the core dump was about to be written.
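On Linux there is a related pitfall: the kernel's core_pattern setting controls where core dumps go, and if it begins with a pipe character the core is handed to a helper program (for example systemd-coredump or apport) and never appears in the working directory at all. A quick check:

```shell
# Linux: show where the kernel writes core dumps.
cat /proc/sys/kernel/core_pattern

# A leading "|" means cores are piped to a helper program
# (e.g. systemd-coredump or apport) instead of being written
# to the process's working directory.
```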
Solaris OS only: The coreadm tool has been used to configure the directory and name of the core file, but any of the above reasons apply for the configured directory or filename.
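The coreadm tool both reports and changes these settings. A Solaris-only configuration sketch (the pattern tokens %f and %p expand to the executable name and the PID; the global settings require root privileges), shown here without a runnable test since it applies only on Solaris:

```shell
# Solaris only. Show the current core file configuration:
coreadm

# Set a per-process core file name pattern for process $$:
coreadm -p core.%f.%p $$

# Set the global pattern and enable global core dumps (root):
# coreadm -g /var/cores/core.%f.%p
# coreadm -e global
```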
The core file size limit is too low. Check the limit with the ulimit -c command (Bash shell) or the limit coredumpsize command (C shell). If the output is not unlimited, core dumps might not be large enough; in that case you will get truncated core dumps or no core dump at all. In addition, ensure that any scripts used to launch the VM or your application do not disable core dump creation.
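For example, in a Bash or POSIX shell you can inspect and remove the limit before launching the JVM. This only affects the current shell and the processes it starts, and it assumes the hard limit permits raising the soft limit:

```shell
# Show the current core file size limit for this shell:
ulimit -c

# Remove the limit for this shell and anything it launches,
# then verify (assumes the hard limit allows it):
ulimit -c unlimited
ulimit -c
```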
The process is running a setuid program, so the operating system will not dump core unless this is explicitly enabled.
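On Linux this behavior is governed by the fs.suid_dumpable sysctl; on Solaris, coreadm -e proc-setid enables dumps for setuid processes. A minimal Linux check (changing the value requires root):

```shell
# Linux: check whether setuid processes may dump core.
# 0 = never, 1 = dump as usual, 2 = dump readable only by
#     a privileged user ("suidsafe").
cat /proc/sys/fs/suid_dumpable

# To enable (root required); 2 is the safer choice:
# sysctl -w fs.suid_dumpable=2

# Solaris equivalent:
# coreadm -e proc-setid
```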
Java specific: If the process received SIGSEGV or SIGILL but produced no core dump, the signal may have been caught and handled by a signal handler inside the process.
The HotSpot VM intercepts the SIGSEGV signal and uses it for legitimate purposes, such as throwing NullPointerException and triggering deoptimization. The VM leaves the signal unhandled, and therefore dumps core, only when the faulting instruction (PC) lies outside VM-generated code.